Test Report: KVM_Linux_crio 19868

7e440490692625b78ba9b7da2770c31edaec7633:2024-10-26:36808

Failed tests (31/320)

Order  Failed test  Duration (s)
36 TestAddons/parallel/Ingress 153.44
38 TestAddons/parallel/MetricsServer 347.01
47 TestAddons/StoppedEnableDisable 154.34
166 TestMultiControlPlane/serial/StopSecondaryNode 141.39
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.57
168 TestMultiControlPlane/serial/RestartSecondaryNode 6.51
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.28
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 415.77
173 TestMultiControlPlane/serial/StopCluster 142.09
233 TestMultiNode/serial/RestartKeepsNodes 325.32
235 TestMultiNode/serial/StopMultiNode 145.34
242 TestPreload 269.66
250 TestKubernetesUpgrade 1175.68
285 TestStartStop/group/old-k8s-version/serial/FirstStart 274.79
300 TestStartStop/group/no-preload/serial/Stop 139.18
303 TestStartStop/group/embed-certs/serial/Stop 139.01
304 TestStartStop/group/old-k8s-version/serial/DeployApp 0.48
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 80.29
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
308 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.39
312 TestStartStop/group/old-k8s-version/serial/SecondStart 751.74
313 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 541.89
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 541.98
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.06
320 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 541.38
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.39
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 485.39
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 369.61
325 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 147.27
328 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.09
383 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 436.59
TestAddons/parallel/Ingress (153.44s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-602145 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-602145 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-602145 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e5facde9-7465-4490-b87c-c7f93997b01b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e5facde9-7465-4490-b87c-c7f93997b01b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00380322s
I1026 00:47:24.900235   17615 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-602145 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.090485324s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-602145 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.207
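Status 28 above is curl's timeout exit code surfacing through minikube ssh: nothing behind the ingress answered on 127.0.0.1:80 inside the VM before the command gave up after roughly 2m12s. A minimal sketch for reproducing the probe by hand against the same profile, assuming addons-602145 is still running (the first command is the test's own probe from addons_test.go:262; the namespace and label selector are the ones the test waits on at addons_test.go:207):

	# Re-run the probe the test uses against the ingress controller inside the VM
	out/minikube-linux-amd64 -p addons-602145 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

	# Check that the controller pod is Ready and note its node/IP
	kubectl --context addons-602145 -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o wide

	# Pull recent controller logs to see whether the request arrived at all
	kubectl --context addons-602145 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=100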
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-602145 -n addons-602145
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-602145 logs -n 25: (1.122586387s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC | 26 Oct 24 00:43 UTC |
	| delete  | -p download-only-798188                                                                     | download-only-798188 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC | 26 Oct 24 00:43 UTC |
	| delete  | -p download-only-699862                                                                     | download-only-699862 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC | 26 Oct 24 00:43 UTC |
	| delete  | -p download-only-798188                                                                     | download-only-798188 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC | 26 Oct 24 00:43 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-422612 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC |                     |
	|         | binary-mirror-422612                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37063                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-422612                                                                     | binary-mirror-422612 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC | 26 Oct 24 00:43 UTC |
	| addons  | enable dashboard -p                                                                         | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC |                     |
	|         | addons-602145                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC |                     |
	|         | addons-602145                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-602145 --wait=true                                                                | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC | 26 Oct 24 00:46 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-602145 addons disable                                                                | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:46 UTC | 26 Oct 24 00:46 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-602145 addons disable                                                                | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:46 UTC | 26 Oct 24 00:46 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:46 UTC | 26 Oct 24 00:46 UTC |
	|         | -p addons-602145                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-602145 addons                                                                        | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:47 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-602145 addons disable                                                                | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:47 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-602145 ip                                                                            | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:47 UTC |
	| addons  | addons-602145 addons disable                                                                | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:47 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-602145 addons                                                                        | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:47 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-602145 addons disable                                                                | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:47 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-602145 ssh curl -s                                                                   | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-602145 ssh cat                                                                       | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:47 UTC |
	|         | /opt/local-path-provisioner/pvc-323584fd-5eeb-4dce-983c-67e6333a4dfe_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-602145 addons disable                                                                | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:48 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-602145 addons                                                                        | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:47 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-602145 addons                                                                        | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:48 UTC | 26 Oct 24 00:48 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-602145 addons                                                                        | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:48 UTC | 26 Oct 24 00:48 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-602145 ip                                                                            | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:49 UTC | 26 Oct 24 00:49 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 00:43:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 00:43:55.614406   18362 out.go:345] Setting OutFile to fd 1 ...
	I1026 00:43:55.614530   18362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:43:55.614539   18362 out.go:358] Setting ErrFile to fd 2...
	I1026 00:43:55.614544   18362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:43:55.614714   18362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 00:43:55.615270   18362 out.go:352] Setting JSON to false
	I1026 00:43:55.616067   18362 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1576,"bootTime":1729901860,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 00:43:55.616123   18362 start.go:139] virtualization: kvm guest
	I1026 00:43:55.617880   18362 out.go:177] * [addons-602145] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 00:43:55.619108   18362 notify.go:220] Checking for updates...
	I1026 00:43:55.619121   18362 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 00:43:55.620411   18362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:43:55.621634   18362 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 00:43:55.622772   18362 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:43:55.623847   18362 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 00:43:55.625354   18362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 00:43:55.626552   18362 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 00:43:55.657051   18362 out.go:177] * Using the kvm2 driver based on user configuration
	I1026 00:43:55.658151   18362 start.go:297] selected driver: kvm2
	I1026 00:43:55.658164   18362 start.go:901] validating driver "kvm2" against <nil>
	I1026 00:43:55.658176   18362 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 00:43:55.659096   18362 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:43:55.659181   18362 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 00:43:55.674226   18362 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 00:43:55.674278   18362 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 00:43:55.674580   18362 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 00:43:55.674612   18362 cni.go:84] Creating CNI manager for ""
	I1026 00:43:55.674697   18362 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 00:43:55.674709   18362 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 00:43:55.674775   18362 start.go:340] cluster config:
	{Name:addons-602145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-602145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 00:43:55.674947   18362 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:43:55.676738   18362 out.go:177] * Starting "addons-602145" primary control-plane node in "addons-602145" cluster
	I1026 00:43:55.677910   18362 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 00:43:55.677939   18362 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 00:43:55.677952   18362 cache.go:56] Caching tarball of preloaded images
	I1026 00:43:55.678018   18362 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 00:43:55.678029   18362 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 00:43:55.678335   18362 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/config.json ...
	I1026 00:43:55.678356   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/config.json: {Name:mk8d11eb76abf3e32b46f47b73cd48b347338ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:43:55.678473   18362 start.go:360] acquireMachinesLock for addons-602145: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 00:43:55.678513   18362 start.go:364] duration metric: took 29.027µs to acquireMachinesLock for "addons-602145"
	I1026 00:43:55.678529   18362 start.go:93] Provisioning new machine with config: &{Name:addons-602145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:addons-602145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 00:43:55.678580   18362 start.go:125] createHost starting for "" (driver="kvm2")
	I1026 00:43:55.680197   18362 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1026 00:43:55.680311   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:43:55.680351   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:43:55.694416   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I1026 00:43:55.694791   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:43:55.695295   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:43:55.695315   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:43:55.695693   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:43:55.695868   18362 main.go:141] libmachine: (addons-602145) Calling .GetMachineName
	I1026 00:43:55.696001   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:43:55.696160   18362 start.go:159] libmachine.API.Create for "addons-602145" (driver="kvm2")
	I1026 00:43:55.696200   18362 client.go:168] LocalClient.Create starting
	I1026 00:43:55.696248   18362 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 00:43:55.815059   18362 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 00:43:55.950771   18362 main.go:141] libmachine: Running pre-create checks...
	I1026 00:43:55.950795   18362 main.go:141] libmachine: (addons-602145) Calling .PreCreateCheck
	I1026 00:43:55.951337   18362 main.go:141] libmachine: (addons-602145) Calling .GetConfigRaw
	I1026 00:43:55.951765   18362 main.go:141] libmachine: Creating machine...
	I1026 00:43:55.951779   18362 main.go:141] libmachine: (addons-602145) Calling .Create
	I1026 00:43:55.951920   18362 main.go:141] libmachine: (addons-602145) Creating KVM machine...
	I1026 00:43:55.953140   18362 main.go:141] libmachine: (addons-602145) DBG | found existing default KVM network
	I1026 00:43:55.953854   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:55.953704   18383 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a40}
	I1026 00:43:55.953898   18362 main.go:141] libmachine: (addons-602145) DBG | created network xml: 
	I1026 00:43:55.953922   18362 main.go:141] libmachine: (addons-602145) DBG | <network>
	I1026 00:43:55.953935   18362 main.go:141] libmachine: (addons-602145) DBG |   <name>mk-addons-602145</name>
	I1026 00:43:55.953948   18362 main.go:141] libmachine: (addons-602145) DBG |   <dns enable='no'/>
	I1026 00:43:55.953957   18362 main.go:141] libmachine: (addons-602145) DBG |   
	I1026 00:43:55.953972   18362 main.go:141] libmachine: (addons-602145) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1026 00:43:55.954003   18362 main.go:141] libmachine: (addons-602145) DBG |     <dhcp>
	I1026 00:43:55.954029   18362 main.go:141] libmachine: (addons-602145) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1026 00:43:55.954041   18362 main.go:141] libmachine: (addons-602145) DBG |     </dhcp>
	I1026 00:43:55.954050   18362 main.go:141] libmachine: (addons-602145) DBG |   </ip>
	I1026 00:43:55.954059   18362 main.go:141] libmachine: (addons-602145) DBG |   
	I1026 00:43:55.954067   18362 main.go:141] libmachine: (addons-602145) DBG | </network>
	I1026 00:43:55.954081   18362 main.go:141] libmachine: (addons-602145) DBG | 
	I1026 00:43:55.959369   18362 main.go:141] libmachine: (addons-602145) DBG | trying to create private KVM network mk-addons-602145 192.168.39.0/24...
	I1026 00:43:56.022338   18362 main.go:141] libmachine: (addons-602145) DBG | private KVM network mk-addons-602145 192.168.39.0/24 created
	I1026 00:43:56.022368   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:56.022296   18383 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:43:56.022387   18362 main.go:141] libmachine: (addons-602145) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145 ...
	I1026 00:43:56.022407   18362 main.go:141] libmachine: (addons-602145) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 00:43:56.022465   18362 main.go:141] libmachine: (addons-602145) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 00:43:56.286340   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:56.286214   18383 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa...
	I1026 00:43:56.501719   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:56.501588   18383 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/addons-602145.rawdisk...
	I1026 00:43:56.501745   18362 main.go:141] libmachine: (addons-602145) DBG | Writing magic tar header
	I1026 00:43:56.501754   18362 main.go:141] libmachine: (addons-602145) DBG | Writing SSH key tar header
	I1026 00:43:56.501761   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:56.501706   18383 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145 ...
	I1026 00:43:56.501851   18362 main.go:141] libmachine: (addons-602145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145
	I1026 00:43:56.501878   18362 main.go:141] libmachine: (addons-602145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 00:43:56.501894   18362 main.go:141] libmachine: (addons-602145) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145 (perms=drwx------)
	I1026 00:43:56.501905   18362 main.go:141] libmachine: (addons-602145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:43:56.501915   18362 main.go:141] libmachine: (addons-602145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 00:43:56.501924   18362 main.go:141] libmachine: (addons-602145) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 00:43:56.501947   18362 main.go:141] libmachine: (addons-602145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 00:43:56.501960   18362 main.go:141] libmachine: (addons-602145) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 00:43:56.501970   18362 main.go:141] libmachine: (addons-602145) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 00:43:56.501979   18362 main.go:141] libmachine: (addons-602145) DBG | Checking permissions on dir: /home/jenkins
	I1026 00:43:56.501992   18362 main.go:141] libmachine: (addons-602145) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 00:43:56.502000   18362 main.go:141] libmachine: (addons-602145) DBG | Checking permissions on dir: /home
	I1026 00:43:56.502015   18362 main.go:141] libmachine: (addons-602145) DBG | Skipping /home - not owner
	I1026 00:43:56.502024   18362 main.go:141] libmachine: (addons-602145) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 00:43:56.502028   18362 main.go:141] libmachine: (addons-602145) Creating domain...
	I1026 00:43:56.503084   18362 main.go:141] libmachine: (addons-602145) define libvirt domain using xml: 
	I1026 00:43:56.503109   18362 main.go:141] libmachine: (addons-602145) <domain type='kvm'>
	I1026 00:43:56.503124   18362 main.go:141] libmachine: (addons-602145)   <name>addons-602145</name>
	I1026 00:43:56.503140   18362 main.go:141] libmachine: (addons-602145)   <memory unit='MiB'>4000</memory>
	I1026 00:43:56.503150   18362 main.go:141] libmachine: (addons-602145)   <vcpu>2</vcpu>
	I1026 00:43:56.503159   18362 main.go:141] libmachine: (addons-602145)   <features>
	I1026 00:43:56.503178   18362 main.go:141] libmachine: (addons-602145)     <acpi/>
	I1026 00:43:56.503199   18362 main.go:141] libmachine: (addons-602145)     <apic/>
	I1026 00:43:56.503216   18362 main.go:141] libmachine: (addons-602145)     <pae/>
	I1026 00:43:56.503234   18362 main.go:141] libmachine: (addons-602145)     
	I1026 00:43:56.503247   18362 main.go:141] libmachine: (addons-602145)   </features>
	I1026 00:43:56.503257   18362 main.go:141] libmachine: (addons-602145)   <cpu mode='host-passthrough'>
	I1026 00:43:56.503266   18362 main.go:141] libmachine: (addons-602145)   
	I1026 00:43:56.503276   18362 main.go:141] libmachine: (addons-602145)   </cpu>
	I1026 00:43:56.503286   18362 main.go:141] libmachine: (addons-602145)   <os>
	I1026 00:43:56.503295   18362 main.go:141] libmachine: (addons-602145)     <type>hvm</type>
	I1026 00:43:56.503306   18362 main.go:141] libmachine: (addons-602145)     <boot dev='cdrom'/>
	I1026 00:43:56.503319   18362 main.go:141] libmachine: (addons-602145)     <boot dev='hd'/>
	I1026 00:43:56.503334   18362 main.go:141] libmachine: (addons-602145)     <bootmenu enable='no'/>
	I1026 00:43:56.503348   18362 main.go:141] libmachine: (addons-602145)   </os>
	I1026 00:43:56.503357   18362 main.go:141] libmachine: (addons-602145)   <devices>
	I1026 00:43:56.503362   18362 main.go:141] libmachine: (addons-602145)     <disk type='file' device='cdrom'>
	I1026 00:43:56.503382   18362 main.go:141] libmachine: (addons-602145)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/boot2docker.iso'/>
	I1026 00:43:56.503390   18362 main.go:141] libmachine: (addons-602145)       <target dev='hdc' bus='scsi'/>
	I1026 00:43:56.503395   18362 main.go:141] libmachine: (addons-602145)       <readonly/>
	I1026 00:43:56.503401   18362 main.go:141] libmachine: (addons-602145)     </disk>
	I1026 00:43:56.503410   18362 main.go:141] libmachine: (addons-602145)     <disk type='file' device='disk'>
	I1026 00:43:56.503421   18362 main.go:141] libmachine: (addons-602145)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 00:43:56.503440   18362 main.go:141] libmachine: (addons-602145)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/addons-602145.rawdisk'/>
	I1026 00:43:56.503457   18362 main.go:141] libmachine: (addons-602145)       <target dev='hda' bus='virtio'/>
	I1026 00:43:56.503471   18362 main.go:141] libmachine: (addons-602145)     </disk>
	I1026 00:43:56.503483   18362 main.go:141] libmachine: (addons-602145)     <interface type='network'>
	I1026 00:43:56.503495   18362 main.go:141] libmachine: (addons-602145)       <source network='mk-addons-602145'/>
	I1026 00:43:56.503505   18362 main.go:141] libmachine: (addons-602145)       <model type='virtio'/>
	I1026 00:43:56.503515   18362 main.go:141] libmachine: (addons-602145)     </interface>
	I1026 00:43:56.503525   18362 main.go:141] libmachine: (addons-602145)     <interface type='network'>
	I1026 00:43:56.503542   18362 main.go:141] libmachine: (addons-602145)       <source network='default'/>
	I1026 00:43:56.503557   18362 main.go:141] libmachine: (addons-602145)       <model type='virtio'/>
	I1026 00:43:56.503569   18362 main.go:141] libmachine: (addons-602145)     </interface>
	I1026 00:43:56.503578   18362 main.go:141] libmachine: (addons-602145)     <serial type='pty'>
	I1026 00:43:56.503589   18362 main.go:141] libmachine: (addons-602145)       <target port='0'/>
	I1026 00:43:56.503596   18362 main.go:141] libmachine: (addons-602145)     </serial>
	I1026 00:43:56.503602   18362 main.go:141] libmachine: (addons-602145)     <console type='pty'>
	I1026 00:43:56.503618   18362 main.go:141] libmachine: (addons-602145)       <target type='serial' port='0'/>
	I1026 00:43:56.503628   18362 main.go:141] libmachine: (addons-602145)     </console>
	I1026 00:43:56.503637   18362 main.go:141] libmachine: (addons-602145)     <rng model='virtio'>
	I1026 00:43:56.503653   18362 main.go:141] libmachine: (addons-602145)       <backend model='random'>/dev/random</backend>
	I1026 00:43:56.503678   18362 main.go:141] libmachine: (addons-602145)     </rng>
	I1026 00:43:56.503693   18362 main.go:141] libmachine: (addons-602145)     
	I1026 00:43:56.503701   18362 main.go:141] libmachine: (addons-602145)     
	I1026 00:43:56.503706   18362 main.go:141] libmachine: (addons-602145)   </devices>
	I1026 00:43:56.503714   18362 main.go:141] libmachine: (addons-602145) </domain>
	I1026 00:43:56.503723   18362 main.go:141] libmachine: (addons-602145) 
	I1026 00:43:56.509222   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c9:0b:50 in network default
	I1026 00:43:56.509751   18362 main.go:141] libmachine: (addons-602145) Ensuring networks are active...
	I1026 00:43:56.509768   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:43:56.510402   18362 main.go:141] libmachine: (addons-602145) Ensuring network default is active
	I1026 00:43:56.510731   18362 main.go:141] libmachine: (addons-602145) Ensuring network mk-addons-602145 is active
	I1026 00:43:56.511210   18362 main.go:141] libmachine: (addons-602145) Getting domain xml...
	I1026 00:43:56.511787   18362 main.go:141] libmachine: (addons-602145) Creating domain...
	I1026 00:43:57.889883   18362 main.go:141] libmachine: (addons-602145) Waiting to get IP...
	I1026 00:43:57.890736   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:43:57.891169   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:43:57.891232   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:57.891176   18383 retry.go:31] will retry after 198.139157ms: waiting for machine to come up
	I1026 00:43:58.090417   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:43:58.090774   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:43:58.090812   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:58.090764   18383 retry.go:31] will retry after 324.888481ms: waiting for machine to come up
	I1026 00:43:58.417469   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:43:58.417887   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:43:58.417928   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:58.417842   18383 retry.go:31] will retry after 294.424781ms: waiting for machine to come up
	I1026 00:43:58.714356   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:43:58.714746   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:43:58.714775   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:58.714706   18383 retry.go:31] will retry after 519.90861ms: waiting for machine to come up
	I1026 00:43:59.236542   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:43:59.236895   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:43:59.236929   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:59.236866   18383 retry.go:31] will retry after 592.882017ms: waiting for machine to come up
	I1026 00:43:59.831579   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:43:59.832004   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:43:59.832026   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:59.831957   18383 retry.go:31] will retry after 902.357908ms: waiting for machine to come up
	I1026 00:44:00.735715   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:00.736126   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:44:00.736149   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:44:00.736091   18383 retry.go:31] will retry after 1.1727963s: waiting for machine to come up
	I1026 00:44:01.910538   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:01.911001   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:44:01.911029   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:44:01.910950   18383 retry.go:31] will retry after 1.229780318s: waiting for machine to come up
	I1026 00:44:03.142273   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:03.142619   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:44:03.142646   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:44:03.142555   18383 retry.go:31] will retry after 1.794501043s: waiting for machine to come up
	I1026 00:44:04.939417   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:04.939681   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:44:04.939704   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:44:04.939638   18383 retry.go:31] will retry after 1.740655734s: waiting for machine to come up
	I1026 00:44:06.681963   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:06.682436   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:44:06.682461   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:44:06.682396   18383 retry.go:31] will retry after 2.565591967s: waiting for machine to come up
	I1026 00:44:09.251163   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:09.251533   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:44:09.251556   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:44:09.251499   18383 retry.go:31] will retry after 3.368747645s: waiting for machine to come up
	I1026 00:44:12.622506   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:12.622788   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:44:12.622817   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:44:12.622743   18383 retry.go:31] will retry after 3.25115137s: waiting for machine to come up
	I1026 00:44:15.875930   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:15.876313   18362 main.go:141] libmachine: (addons-602145) Found IP for machine: 192.168.39.207
	I1026 00:44:15.876352   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has current primary IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:15.876361   18362 main.go:141] libmachine: (addons-602145) Reserving static IP address...
	I1026 00:44:15.876690   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find host DHCP lease matching {name: "addons-602145", mac: "52:54:00:c1:12:e0", ip: "192.168.39.207"} in network mk-addons-602145
	I1026 00:44:15.946580   18362 main.go:141] libmachine: (addons-602145) Reserved static IP address: 192.168.39.207
	I1026 00:44:15.946617   18362 main.go:141] libmachine: (addons-602145) Waiting for SSH to be available...
	I1026 00:44:15.946626   18362 main.go:141] libmachine: (addons-602145) DBG | Getting to WaitForSSH function...
	I1026 00:44:15.949198   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:15.949664   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:15.949694   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:15.949919   18362 main.go:141] libmachine: (addons-602145) DBG | Using SSH client type: external
	I1026 00:44:15.949932   18362 main.go:141] libmachine: (addons-602145) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa (-rw-------)
	I1026 00:44:15.949968   18362 main.go:141] libmachine: (addons-602145) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 00:44:15.949990   18362 main.go:141] libmachine: (addons-602145) DBG | About to run SSH command:
	I1026 00:44:15.950001   18362 main.go:141] libmachine: (addons-602145) DBG | exit 0
	I1026 00:44:16.077239   18362 main.go:141] libmachine: (addons-602145) DBG | SSH cmd err, output: <nil>: 
	I1026 00:44:16.077586   18362 main.go:141] libmachine: (addons-602145) KVM machine creation complete!
	I1026 00:44:16.077868   18362 main.go:141] libmachine: (addons-602145) Calling .GetConfigRaw
	I1026 00:44:16.078412   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:16.078561   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:16.078688   18362 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 00:44:16.078705   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:16.079985   18362 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 00:44:16.079998   18362 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 00:44:16.080002   18362 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 00:44:16.080008   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:16.082144   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.082451   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:16.082471   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.082599   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:16.082780   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.082930   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.083044   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:16.083182   18362 main.go:141] libmachine: Using SSH client type: native
	I1026 00:44:16.083354   18362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I1026 00:44:16.083363   18362 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 00:44:16.180435   18362 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 00:44:16.180459   18362 main.go:141] libmachine: Detecting the provisioner...
	I1026 00:44:16.180466   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:16.183346   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.183683   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:16.183725   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.183875   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:16.184062   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.184220   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.184359   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:16.184479   18362 main.go:141] libmachine: Using SSH client type: native
	I1026 00:44:16.184680   18362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I1026 00:44:16.184692   18362 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 00:44:16.281574   18362 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 00:44:16.281693   18362 main.go:141] libmachine: found compatible host: buildroot
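
Note: the provisioner detection above boils down to reading /etc/os-release over SSH and matching the reported distribution (here "buildroot"). A minimal way to reproduce the same check by hand, using the host IP and key path from this run, is roughly:

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa \
	    docker@192.168.39.207 'cat /etc/os-release'
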
	I1026 00:44:16.281708   18362 main.go:141] libmachine: Provisioning with buildroot...
	I1026 00:44:16.281718   18362 main.go:141] libmachine: (addons-602145) Calling .GetMachineName
	I1026 00:44:16.281944   18362 buildroot.go:166] provisioning hostname "addons-602145"
	I1026 00:44:16.281973   18362 main.go:141] libmachine: (addons-602145) Calling .GetMachineName
	I1026 00:44:16.282147   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:16.284487   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.284809   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:16.284828   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.284943   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:16.285111   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.285247   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.285371   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:16.285551   18362 main.go:141] libmachine: Using SSH client type: native
	I1026 00:44:16.285723   18362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I1026 00:44:16.285735   18362 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-602145 && echo "addons-602145" | sudo tee /etc/hostname
	I1026 00:44:16.400619   18362 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-602145
	
	I1026 00:44:16.400650   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:16.403067   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.403376   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:16.403412   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.403537   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:16.403705   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.403866   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.403961   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:16.404102   18362 main.go:141] libmachine: Using SSH client type: native
	I1026 00:44:16.404260   18362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I1026 00:44:16.404274   18362 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-602145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-602145/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-602145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 00:44:16.509123   18362 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 00:44:16.509157   18362 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 00:44:16.509175   18362 buildroot.go:174] setting up certificates
	I1026 00:44:16.509185   18362 provision.go:84] configureAuth start
	I1026 00:44:16.509193   18362 main.go:141] libmachine: (addons-602145) Calling .GetMachineName
	I1026 00:44:16.509480   18362 main.go:141] libmachine: (addons-602145) Calling .GetIP
	I1026 00:44:16.511898   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.512164   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:16.512192   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.512296   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:16.514231   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.514585   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:16.514612   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.514711   18362 provision.go:143] copyHostCerts
	I1026 00:44:16.514800   18362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 00:44:16.514918   18362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 00:44:16.514999   18362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 00:44:16.515065   18362 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.addons-602145 san=[127.0.0.1 192.168.39.207 addons-602145 localhost minikube]
	I1026 00:44:16.681734   18362 provision.go:177] copyRemoteCerts
	I1026 00:44:16.681795   18362 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 00:44:16.681816   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:16.684306   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.684602   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:16.684623   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.684844   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:16.685039   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.685186   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:16.685286   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:16.762584   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 00:44:16.784173   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 00:44:16.805183   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
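
Note: the server certificate copied above was generated with the SANs listed earlier in this log (127.0.0.1, 192.168.39.207, addons-602145, localhost, minikube). A rough way to confirm them on the guest, assuming openssl is available there, is:

	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
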
	I1026 00:44:16.826070   18362 provision.go:87] duration metric: took 316.87402ms to configureAuth
	I1026 00:44:16.826101   18362 buildroot.go:189] setting minikube options for container-runtime
	I1026 00:44:16.826293   18362 config.go:182] Loaded profile config "addons-602145": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 00:44:16.826378   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:16.828731   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.829026   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:16.829046   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.829208   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:16.829365   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.829500   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.829611   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:16.829743   18362 main.go:141] libmachine: Using SSH client type: native
	I1026 00:44:16.829935   18362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I1026 00:44:16.829952   18362 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 00:44:17.044788   18362 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 00:44:17.044815   18362 main.go:141] libmachine: Checking connection to Docker...
	I1026 00:44:17.044822   18362 main.go:141] libmachine: (addons-602145) Calling .GetURL
	I1026 00:44:17.046228   18362 main.go:141] libmachine: (addons-602145) DBG | Using libvirt version 6000000
	I1026 00:44:17.048406   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.048743   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:17.048771   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.048897   18362 main.go:141] libmachine: Docker is up and running!
	I1026 00:44:17.048909   18362 main.go:141] libmachine: Reticulating splines...
	I1026 00:44:17.048915   18362 client.go:171] duration metric: took 21.35270457s to LocalClient.Create
	I1026 00:44:17.048936   18362 start.go:167] duration metric: took 21.352777514s to libmachine.API.Create "addons-602145"
	I1026 00:44:17.048950   18362 start.go:293] postStartSetup for "addons-602145" (driver="kvm2")
	I1026 00:44:17.048962   18362 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 00:44:17.048978   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:17.049178   18362 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 00:44:17.049206   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:17.051103   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.051466   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:17.051491   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.051603   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:17.051758   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:17.051878   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:17.051983   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:17.130951   18362 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 00:44:17.134727   18362 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 00:44:17.134753   18362 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 00:44:17.134824   18362 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 00:44:17.134847   18362 start.go:296] duration metric: took 85.889764ms for postStartSetup
	I1026 00:44:17.134876   18362 main.go:141] libmachine: (addons-602145) Calling .GetConfigRaw
	I1026 00:44:17.135429   18362 main.go:141] libmachine: (addons-602145) Calling .GetIP
	I1026 00:44:17.137786   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.138127   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:17.138153   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.138350   18362 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/config.json ...
	I1026 00:44:17.138517   18362 start.go:128] duration metric: took 21.45992765s to createHost
	I1026 00:44:17.138537   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:17.140745   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.141024   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:17.141064   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.141220   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:17.141371   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:17.141528   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:17.141641   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:17.141775   18362 main.go:141] libmachine: Using SSH client type: native
	I1026 00:44:17.141968   18362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I1026 00:44:17.141978   18362 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 00:44:17.241636   18362 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729903457.215519488
	
	I1026 00:44:17.241658   18362 fix.go:216] guest clock: 1729903457.215519488
	I1026 00:44:17.241665   18362 fix.go:229] Guest: 2024-10-26 00:44:17.215519488 +0000 UTC Remote: 2024-10-26 00:44:17.138527799 +0000 UTC m=+21.559650378 (delta=76.991689ms)
	I1026 00:44:17.241694   18362 fix.go:200] guest clock delta is within tolerance: 76.991689ms
	I1026 00:44:17.241699   18362 start.go:83] releasing machines lock for "addons-602145", held for 21.563176612s
	I1026 00:44:17.241717   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:17.241948   18362 main.go:141] libmachine: (addons-602145) Calling .GetIP
	I1026 00:44:17.244474   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.244802   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:17.244828   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.244956   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:17.245372   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:17.245651   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:17.245741   18362 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 00:44:17.245787   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:17.245857   18362 ssh_runner.go:195] Run: cat /version.json
	I1026 00:44:17.245868   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:17.248370   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.248552   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.248690   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:17.248711   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.248878   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:17.248893   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:17.248915   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.249035   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:17.249081   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:17.249158   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:17.249271   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:17.249274   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:17.249391   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:17.249524   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:17.371475   18362 ssh_runner.go:195] Run: systemctl --version
	I1026 00:44:17.377239   18362 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 00:44:17.533030   18362 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 00:44:17.539110   18362 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 00:44:17.539170   18362 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 00:44:17.556792   18362 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 00:44:17.556816   18362 start.go:495] detecting cgroup driver to use...
	I1026 00:44:17.556879   18362 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 00:44:17.571840   18362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 00:44:17.585193   18362 docker.go:217] disabling cri-docker service (if available) ...
	I1026 00:44:17.585244   18362 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 00:44:17.598450   18362 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 00:44:17.611348   18362 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 00:44:17.724975   18362 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 00:44:17.860562   18362 docker.go:233] disabling docker service ...
	I1026 00:44:17.860624   18362 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 00:44:17.878417   18362 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 00:44:17.890621   18362 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 00:44:18.027576   18362 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 00:44:18.152826   18362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 00:44:18.165246   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 00:44:18.181792   18362 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 00:44:18.181843   18362 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:44:18.191166   18362 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 00:44:18.191229   18362 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:44:18.200643   18362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:44:18.210120   18362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:44:18.219499   18362 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 00:44:18.229225   18362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:44:18.238769   18362 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:44:18.254338   18362 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:44:18.263553   18362 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 00:44:18.271947   18362 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 00:44:18.271998   18362 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 00:44:18.283179   18362 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 00:44:18.291951   18362 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 00:44:18.411944   18362 ssh_runner.go:195] Run: sudo systemctl restart crio
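
Note: condensed, the CRI-O reconfiguration performed by the commands above amounts to roughly the following sequence (paths and values taken from this run's log); this is a sketch of what was run, not a recommended standalone script:

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"       # pause image
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"                   # cgroup driver
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# (the log also clears any prior entry and ensures a default_sysctls = [] block exists first)
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"  # unprivileged low ports
	sudo modprobe br_netfilter                                    # netfilter for bridged pod traffic
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio
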
	I1026 00:44:18.500474   18362 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 00:44:18.500561   18362 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 00:44:18.505361   18362 start.go:563] Will wait 60s for crictl version
	I1026 00:44:18.505435   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:44:18.508746   18362 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 00:44:18.544203   18362 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 00:44:18.544314   18362 ssh_runner.go:195] Run: crio --version
	I1026 00:44:18.569896   18362 ssh_runner.go:195] Run: crio --version
	I1026 00:44:18.597852   18362 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 00:44:18.599187   18362 main.go:141] libmachine: (addons-602145) Calling .GetIP
	I1026 00:44:18.602535   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:18.602978   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:18.603007   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:18.603209   18362 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 00:44:18.606878   18362 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 00:44:18.618164   18362 kubeadm.go:883] updating cluster {Name:addons-602145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-602145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1026 00:44:18.618259   18362 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 00:44:18.618302   18362 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 00:44:18.647501   18362 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1026 00:44:18.647556   18362 ssh_runner.go:195] Run: which lz4
	I1026 00:44:18.650977   18362 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 00:44:18.654650   18362 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 00:44:18.654688   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1026 00:44:19.704577   18362 crio.go:462] duration metric: took 1.05362861s to copy over tarball
	I1026 00:44:19.704656   18362 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 00:44:21.744004   18362 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.039313463s)
	I1026 00:44:21.744029   18362 crio.go:469] duration metric: took 2.039426425s to extract the tarball
	I1026 00:44:21.744036   18362 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 00:44:21.779704   18362 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 00:44:21.823505   18362 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 00:44:21.823530   18362 cache_images.go:84] Images are preloaded, skipping loading
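
Note: a quick manual spot-check that the preloaded images really landed in the CRI-O image store (the same information the "sudo crictl images --output json" calls above return) would be:

	sudo crictl images | grep kube-apiserver    # expect registry.k8s.io/kube-apiserver tagged v1.31.2
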
	I1026 00:44:21.823539   18362 kubeadm.go:934] updating node { 192.168.39.207 8443 v1.31.2 crio true true} ...
	I1026 00:44:21.823638   18362 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-602145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-602145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 00:44:21.823701   18362 ssh_runner.go:195] Run: crio config
	I1026 00:44:21.863753   18362 cni.go:84] Creating CNI manager for ""
	I1026 00:44:21.863774   18362 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 00:44:21.863785   18362 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 00:44:21.863806   18362 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.207 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-602145 NodeName:addons-602145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 00:44:21.863906   18362 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-602145"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.207"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.207"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 00:44:21.863970   18362 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 00:44:21.873123   18362 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 00:44:21.873181   18362 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 00:44:21.881926   18362 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1026 00:44:21.897049   18362 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 00:44:21.911620   18362 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
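
Note: the rendered kubeadm config is written here as kubeadm.yaml.new and is copied to /var/tmp/minikube/kubeadm.yaml before init later in this log. If one wanted to exercise it without bootstrapping anything, a dry run with the same kubeadm binary would look roughly like:

	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
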
	I1026 00:44:21.926348   18362 ssh_runner.go:195] Run: grep 192.168.39.207	control-plane.minikube.internal$ /etc/hosts
	I1026 00:44:21.929745   18362 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 00:44:21.940587   18362 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 00:44:22.050090   18362 ssh_runner.go:195] Run: sudo systemctl start kubelet
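
Note: after the daemon-reload and start above, kubelet health on the guest can be confirmed with the same systemd primitives used elsewhere in this log, for example:

	sudo systemctl is-active kubelet
	sudo journalctl -u kubelet --no-pager | tail -n 20
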
	I1026 00:44:22.065281   18362 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145 for IP: 192.168.39.207
	I1026 00:44:22.065311   18362 certs.go:194] generating shared ca certs ...
	I1026 00:44:22.065330   18362 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.065512   18362 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 00:44:22.237379   18362 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt ...
	I1026 00:44:22.237412   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt: {Name:mk3c127015e37380407dc6638ce54fc88c77b493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.237591   18362 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key ...
	I1026 00:44:22.237601   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key: {Name:mk7de4df9acb036a6d7b414631e09603baf60c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.237672   18362 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 00:44:22.310306   18362 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt ...
	I1026 00:44:22.310332   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt: {Name:mk9e3186936c323000cec16bc2f982aa6ac345e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.310472   18362 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key ...
	I1026 00:44:22.310483   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key: {Name:mk12a79b4c0d797bf5c5e676c0e8da6a87984c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.310547   18362 certs.go:256] generating profile certs ...
	I1026 00:44:22.310594   18362 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.key
	I1026 00:44:22.310609   18362 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt with IP's: []
	I1026 00:44:22.414269   18362 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt ...
	I1026 00:44:22.414299   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: {Name:mk59642db8b1e44c55a4b368b376e78b938381d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.414454   18362 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.key ...
	I1026 00:44:22.414464   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.key: {Name:mk8af73069dc8211099d6ba14c77d7dc56b20e16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.414530   18362 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.key.cbb7ad52
	I1026 00:44:22.414547   18362 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.crt.cbb7ad52 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.207]
	I1026 00:44:22.522754   18362 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.crt.cbb7ad52 ...
	I1026 00:44:22.522786   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.crt.cbb7ad52: {Name:mk1be9ecb2bf9b4a0cde6cb7c2493e966bffd8f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.522931   18362 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.key.cbb7ad52 ...
	I1026 00:44:22.522942   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.key.cbb7ad52: {Name:mk0b09375294e59642b26e78c66ddf8850b79512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.523030   18362 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.crt.cbb7ad52 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.crt
	I1026 00:44:22.523109   18362 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.key.cbb7ad52 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.key
	I1026 00:44:22.523157   18362 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/proxy-client.key
	I1026 00:44:22.523173   18362 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/proxy-client.crt with IP's: []
	I1026 00:44:22.799300   18362 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/proxy-client.crt ...
	I1026 00:44:22.799330   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/proxy-client.crt: {Name:mk183b421d2ac65e5dd1715a5fb93c0771ff3857 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.799484   18362 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/proxy-client.key ...
	I1026 00:44:22.799494   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/proxy-client.key: {Name:mk8c565fbfea03d35d5b91237c40613d8e56f3f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.799648   18362 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 00:44:22.799685   18362 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 00:44:22.799709   18362 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 00:44:22.799732   18362 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 00:44:22.800321   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 00:44:22.828102   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 00:44:22.863802   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 00:44:22.885129   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 00:44:22.905574   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 00:44:22.926209   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 00:44:22.946806   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 00:44:22.967616   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 00:44:22.988355   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 00:44:23.008992   18362 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 00:44:23.023720   18362 ssh_runner.go:195] Run: openssl version
	I1026 00:44:23.029273   18362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 00:44:23.039291   18362 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 00:44:23.043401   18362 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 00:44:23.043460   18362 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 00:44:23.048759   18362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
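
Note: the b5213941.0 symlink created above follows OpenSSL's subject-hash naming convention: the link name is the hash printed by the "openssl x509 -hash" call just before it, plus a ".0" suffix. Verifying by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941 for this run
	ls -l /etc/ssl/certs/b5213941.0
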
	I1026 00:44:23.058732   18362 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 00:44:23.062382   18362 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 00:44:23.062436   18362 kubeadm.go:392] StartCluster: {Name:addons-602145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-602145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 00:44:23.062512   18362 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 00:44:23.062556   18362 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 00:44:23.095758   18362 cri.go:89] found id: ""
	I1026 00:44:23.095826   18362 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 00:44:23.104849   18362 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 00:44:23.113558   18362 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 00:44:23.122035   18362 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 00:44:23.122054   18362 kubeadm.go:157] found existing configuration files:
	
	I1026 00:44:23.122100   18362 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 00:44:23.130298   18362 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 00:44:23.130363   18362 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 00:44:23.139045   18362 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 00:44:23.147045   18362 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 00:44:23.147092   18362 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 00:44:23.155362   18362 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 00:44:23.163233   18362 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 00:44:23.163280   18362 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 00:44:23.171864   18362 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 00:44:23.180038   18362 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 00:44:23.180106   18362 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
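The eight commands above are minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is missing (here every grep exits with status 2 because the files do not exist yet). A minimal sketch of that pattern, using the same endpoint and file list as the log:

    # For each kubeconfig, keep it only if it already points at the expected
    # endpoint; otherwise remove it so kubeadm can regenerate it.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done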
	I1026 00:44:23.188383   18362 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 00:44:23.331565   18362 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 00:44:32.692523   18362 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1026 00:44:32.692576   18362 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 00:44:32.692701   18362 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 00:44:32.692843   18362 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 00:44:32.692931   18362 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 00:44:32.692984   18362 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 00:44:32.694199   18362 out.go:235]   - Generating certificates and keys ...
	I1026 00:44:32.694278   18362 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 00:44:32.694371   18362 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 00:44:32.694467   18362 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 00:44:32.694556   18362 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1026 00:44:32.694636   18362 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1026 00:44:32.694718   18362 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1026 00:44:32.694802   18362 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1026 00:44:32.694954   18362 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-602145 localhost] and IPs [192.168.39.207 127.0.0.1 ::1]
	I1026 00:44:32.695025   18362 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1026 00:44:32.695173   18362 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-602145 localhost] and IPs [192.168.39.207 127.0.0.1 ::1]
	I1026 00:44:32.695264   18362 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 00:44:32.695365   18362 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 00:44:32.695432   18362 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1026 00:44:32.695513   18362 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 00:44:32.695586   18362 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 00:44:32.695665   18362 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 00:44:32.695760   18362 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 00:44:32.695819   18362 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 00:44:32.695866   18362 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 00:44:32.695948   18362 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 00:44:32.696047   18362 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 00:44:32.697241   18362 out.go:235]   - Booting up control plane ...
	I1026 00:44:32.697352   18362 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 00:44:32.697466   18362 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 00:44:32.697526   18362 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 00:44:32.697612   18362 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 00:44:32.697690   18362 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 00:44:32.697734   18362 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 00:44:32.697860   18362 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 00:44:32.697980   18362 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 00:44:32.698076   18362 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.91963ms
	I1026 00:44:32.698182   18362 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1026 00:44:32.698259   18362 kubeadm.go:310] [api-check] The API server is healthy after 5.501627653s
	I1026 00:44:32.698391   18362 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 00:44:32.698557   18362 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 00:44:32.698644   18362 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 00:44:32.698905   18362 kubeadm.go:310] [mark-control-plane] Marking the node addons-602145 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 00:44:32.698998   18362 kubeadm.go:310] [bootstrap-token] Using token: i9uyyo.fe8oo1yr6slh6qor
	I1026 00:44:32.700913   18362 out.go:235]   - Configuring RBAC rules ...
	I1026 00:44:32.701006   18362 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 00:44:32.701076   18362 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 00:44:32.701207   18362 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 00:44:32.701343   18362 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 00:44:32.701524   18362 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 00:44:32.701633   18362 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 00:44:32.701781   18362 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 00:44:32.701850   18362 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1026 00:44:32.701896   18362 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1026 00:44:32.701902   18362 kubeadm.go:310] 
	I1026 00:44:32.701950   18362 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1026 00:44:32.701955   18362 kubeadm.go:310] 
	I1026 00:44:32.702044   18362 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1026 00:44:32.702053   18362 kubeadm.go:310] 
	I1026 00:44:32.702077   18362 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1026 00:44:32.702143   18362 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 00:44:32.702199   18362 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 00:44:32.702208   18362 kubeadm.go:310] 
	I1026 00:44:32.702258   18362 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1026 00:44:32.702264   18362 kubeadm.go:310] 
	I1026 00:44:32.702325   18362 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 00:44:32.702334   18362 kubeadm.go:310] 
	I1026 00:44:32.702379   18362 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1026 00:44:32.702449   18362 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 00:44:32.702519   18362 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 00:44:32.702527   18362 kubeadm.go:310] 
	I1026 00:44:32.702649   18362 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 00:44:32.702780   18362 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1026 00:44:32.702788   18362 kubeadm.go:310] 
	I1026 00:44:32.702897   18362 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i9uyyo.fe8oo1yr6slh6qor \
	I1026 00:44:32.703034   18362 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d \
	I1026 00:44:32.703059   18362 kubeadm.go:310] 	--control-plane 
	I1026 00:44:32.703064   18362 kubeadm.go:310] 
	I1026 00:44:32.703156   18362 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1026 00:44:32.703173   18362 kubeadm.go:310] 
	I1026 00:44:32.703301   18362 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i9uyyo.fe8oo1yr6slh6qor \
	I1026 00:44:32.703442   18362 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d 
	I1026 00:44:32.703458   18362 cni.go:84] Creating CNI manager for ""
	I1026 00:44:32.703470   18362 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 00:44:32.705094   18362 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 00:44:32.706290   18362 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 00:44:32.718683   18362 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
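The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. Its exact contents are not shown in this log; the sketch below is only a generic bridge conflist in the standard CNI format, not the file minikube generated:

    # Illustrative only: subnet and options are assumptions, not minikube's actual 1-k8s.conflist.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF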
	I1026 00:44:32.736359   18362 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 00:44:32.736424   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:32.736425   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-602145 minikube.k8s.io/updated_at=2024_10_26T00_44_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=addons-602145 minikube.k8s.io/primary=true
	I1026 00:44:32.903007   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:32.903035   18362 ops.go:34] apiserver oom_adj: -16
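The -16 logged here is the kube-apiserver's OOM adjustment, read with the command shown two lines earlier; a negative value makes the kernel less likely to OOM-kill the apiserver. The same check can be repeated on the node:

    # Read the apiserver's OOM adjustment (a negative value such as -16 is expected).
    cat /proc/$(pgrep kube-apiserver)/oom_adj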
	I1026 00:44:33.403464   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:33.903212   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:34.403819   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:34.903467   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:35.403131   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:35.903901   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:36.404022   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:36.904019   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:37.403782   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:37.524160   18362 kubeadm.go:1113] duration metric: took 4.787795845s to wait for elevateKubeSystemPrivileges
	I1026 00:44:37.524192   18362 kubeadm.go:394] duration metric: took 14.461759067s to StartCluster
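The repeated "kubectl get sa default" calls above are a poll: minikube retries roughly every 500ms until the controller-manager has created the "default" ServiceAccount, and the total wait is what the 4.79s elevateKubeSystemPrivileges duration records. A sketch of the same wait, assuming the binary and kubeconfig paths from the log:

    # Poll until the "default" ServiceAccount exists in the default namespace.
    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done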
	I1026 00:44:37.524212   18362 settings.go:142] acquiring lock: {Name:mkb363a7a1b1532a7f832b54a0283d0a9e3d2b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:37.524331   18362 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 00:44:37.524758   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:37.524984   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 00:44:37.524988   18362 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 00:44:37.525093   18362 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1026 00:44:37.525180   18362 config.go:182] Loaded profile config "addons-602145": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
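The toEnable map above drives which addons are installed into the profile; the same switches are exposed on the minikube CLI, for example (shown as a general illustration, not a step this test ran):

    # Toggle and inspect addons for this profile from the host.
    minikube -p addons-602145 addons enable metrics-server
    minikube -p addons-602145 addons list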
	I1026 00:44:37.525218   18362 addons.go:69] Setting yakd=true in profile "addons-602145"
	I1026 00:44:37.525229   18362 addons.go:69] Setting gcp-auth=true in profile "addons-602145"
	I1026 00:44:37.525244   18362 addons.go:234] Setting addon yakd=true in "addons-602145"
	I1026 00:44:37.525255   18362 addons.go:69] Setting ingress-dns=true in profile "addons-602145"
	I1026 00:44:37.525258   18362 addons.go:69] Setting cloud-spanner=true in profile "addons-602145"
	I1026 00:44:37.525268   18362 addons.go:69] Setting storage-provisioner=true in profile "addons-602145"
	I1026 00:44:37.525276   18362 addons.go:234] Setting addon ingress-dns=true in "addons-602145"
	I1026 00:44:37.525280   18362 addons.go:234] Setting addon cloud-spanner=true in "addons-602145"
	I1026 00:44:37.525285   18362 addons.go:234] Setting addon storage-provisioner=true in "addons-602145"
	I1026 00:44:37.525279   18362 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-602145"
	I1026 00:44:37.525300   18362 addons.go:69] Setting volcano=true in profile "addons-602145"
	I1026 00:44:37.525310   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525316   18362 addons.go:69] Setting metrics-server=true in profile "addons-602145"
	I1026 00:44:37.525318   18362 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-602145"
	I1026 00:44:37.525325   18362 addons.go:69] Setting registry=true in profile "addons-602145"
	I1026 00:44:37.525328   18362 addons.go:234] Setting addon metrics-server=true in "addons-602145"
	I1026 00:44:37.525338   18362 addons.go:234] Setting addon registry=true in "addons-602145"
	I1026 00:44:37.525346   18362 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-602145"
	I1026 00:44:37.525238   18362 addons.go:69] Setting default-storageclass=true in profile "addons-602145"
	I1026 00:44:37.525363   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525378   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525349   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525290   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525311   18362 addons.go:234] Setting addon volcano=true in "addons-602145"
	I1026 00:44:37.525463   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525253   18362 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-602145"
	I1026 00:44:37.525520   18362 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-602145"
	I1026 00:44:37.525546   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525805   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.525364   18362 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-602145"
	I1026 00:44:37.525818   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.525844   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.525885   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.525896   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.525223   18362 addons.go:69] Setting inspektor-gadget=true in profile "addons-602145"
	I1026 00:44:37.525909   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.525916   18362 addons.go:234] Setting addon inspektor-gadget=true in "addons-602145"
	I1026 00:44:37.525925   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.525936   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.526125   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.526141   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.526151   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.526169   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.525807   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.526253   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.526260   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.526276   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.525311   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525311   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525296   18362 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-602145"
	I1026 00:44:37.526538   18362 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-602145"
	I1026 00:44:37.526649   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.526687   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.526852   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.526874   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.526879   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.526892   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.527892   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.532618   18362 out.go:177] * Verifying Kubernetes components...
	I1026 00:44:37.525317   18362 addons.go:69] Setting volumesnapshots=true in profile "addons-602145"
	I1026 00:44:37.525330   18362 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-602145"
	I1026 00:44:37.533251   18362 addons.go:234] Setting addon volumesnapshots=true in "addons-602145"
	I1026 00:44:37.533287   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525251   18362 mustload.go:65] Loading cluster: addons-602145
	I1026 00:44:37.533490   18362 config.go:182] Loaded profile config "addons-602145": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 00:44:37.533857   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.533880   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.533890   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.533927   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.525805   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.534417   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.537530   18362 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 00:44:37.525255   18362 addons.go:69] Setting ingress=true in profile "addons-602145"
	I1026 00:44:37.537650   18362 addons.go:234] Setting addon ingress=true in "addons-602145"
	I1026 00:44:37.537710   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.533285   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.547335   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46665
	I1026 00:44:37.547760   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34845
	I1026 00:44:37.548321   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.548847   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.548872   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.549047   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I1026 00:44:37.549332   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.549440   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.550026   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.550056   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.550065   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.550406   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.551725   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.551767   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.554099   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.554123   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.554313   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.554346   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.556482   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.556571   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45555
	I1026 00:44:37.556651   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42487
	I1026 00:44:37.556764   18362 addons.go:234] Setting addon default-storageclass=true in "addons-602145"
	I1026 00:44:37.556806   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.557155   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.557179   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.557357   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.557589   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.557603   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.557811   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.557822   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.558087   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.558101   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.558358   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46479
	I1026 00:44:37.558606   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.558635   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.558670   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.558713   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.558898   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.558957   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.559008   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.559426   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.559467   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.560165   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.560183   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.560589   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.560622   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.568211   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.568835   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.568864   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.572891   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I1026 00:44:37.581662   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I1026 00:44:37.582774   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.583113   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33553
	I1026 00:44:37.583374   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.583401   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.583629   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.583741   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.583763   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41025
	I1026 00:44:37.584228   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.584248   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.584259   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.584277   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.584311   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.584740   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.584758   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.584813   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.584972   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.586257   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.586547   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36559
	I1026 00:44:37.587042   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.587092   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.587404   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.587414   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.587775   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.587817   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.588258   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.589032   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1026 00:44:37.589106   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44761
	I1026 00:44:37.590011   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.590079   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.590247   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.590287   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.590516   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39927
	I1026 00:44:37.590592   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.590613   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.590768   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.590811   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.591098   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.591216   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.591321   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.591489   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.591644   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.591683   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.592315   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.592331   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.592526   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1026 00:44:37.592652   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.593154   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.593188   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.594957   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.595201   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1026 00:44:37.595422   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.595494   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.597742   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1026 00:44:37.598090   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35057
	I1026 00:44:37.598120   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1026 00:44:37.598255   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.598823   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.598841   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.599208   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.599398   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.599738   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.600302   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.600319   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.600710   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.601050   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1026 00:44:37.601234   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.601273   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.601282   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.603097   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1026 00:44:37.603104   18362 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1026 00:44:37.604138   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I1026 00:44:37.604637   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.604787   18362 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 00:44:37.604809   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1026 00:44:37.604828   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.605221   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.605238   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.605627   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.605980   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1026 00:44:37.606513   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.606556   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.606752   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I1026 00:44:37.607081   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.607505   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.607522   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.607825   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.608323   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.608359   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.608572   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.609452   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.609480   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.609528   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1026 00:44:37.609934   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.610103   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.610214   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.610310   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.612043   18362 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1026 00:44:37.612061   18362 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1026 00:44:37.612079   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.615678   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.616067   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.616091   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.616262   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.616423   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.616535   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.616631   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.625452   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I1026 00:44:37.626195   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.626765   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.626790   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.627406   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.627967   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.628014   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.633598   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38231
	I1026 00:44:37.634099   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.634662   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.634680   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.635066   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.635244   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.635661   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43011
	I1026 00:44:37.636102   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.636587   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.636606   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.636958   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.637105   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.637273   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.638928   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.639062   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1026 00:44:37.640606   18362 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1026 00:44:37.640607   18362 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1026 00:44:37.640678   18362 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1026 00:44:37.640697   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.642013   18362 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 00:44:37.642029   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1026 00:44:37.642135   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.643012   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34063
	I1026 00:44:37.643839   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.644182   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39651
	I1026 00:44:37.644277   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.644290   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.644641   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.644836   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.644993   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.645317   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.645884   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.645901   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.645973   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46721
	I1026 00:44:37.645982   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.645996   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.646226   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.646286   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.646448   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.646784   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.646804   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.646820   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.647010   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.647033   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.647054   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.647094   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.647265   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.647399   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.647417   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.647455   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.647499   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.647840   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.647852   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.647989   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.649334   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.649402   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35713
	I1026 00:44:37.649950   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.650309   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.651140   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.651157   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.651174   18362 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1026 00:44:37.651600   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.651610   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.651801   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.651874   18362 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1026 00:44:37.653372   18362 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 00:44:37.653390   18362 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 00:44:37.653402   18362 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1026 00:44:37.653409   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.653565   18362 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 00:44:37.653581   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1026 00:44:37.653596   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.654762   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I1026 00:44:37.655150   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.656152   18362 out.go:177]   - Using image docker.io/registry:2.8.3
	I1026 00:44:37.656419   18362 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-602145"
	I1026 00:44:37.656457   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.656827   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.656858   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.657505   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.657522   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.657574   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.657745   18362 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1026 00:44:37.657765   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1026 00:44:37.657782   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.657948   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.658288   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.659213   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.660542   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.661001   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.661078   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.661344   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.661355   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.661362   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.661755   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.661927   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.661943   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.661960   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.661974   18362 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 00:44:37.662117   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.662151   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.662309   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.662318   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I1026 00:44:37.662360   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.662479   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.662821   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.662824   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.663827   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.663914   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.663941   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.664091   18362 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 00:44:37.664108   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 00:44:37.664123   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.664127   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.664271   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.664389   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.664672   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.665026   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.665243   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38539
	I1026 00:44:37.665590   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.666004   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.666022   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.666356   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.666543   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.668518   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43783
	I1026 00:44:37.668836   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46319
	I1026 00:44:37.669078   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.669160   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.669467   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I1026 00:44:37.669711   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.669715   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.669733   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.669746   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.670093   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.670165   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.670392   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.670757   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.670773   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.671106   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.671227   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.671436   18362 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 00:44:37.671450   18362 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 00:44:37.671463   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.671477   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.671516   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I1026 00:44:37.672099   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.672168   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.672256   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.672503   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.672596   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:37.672622   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:37.674584   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.674594   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.674626   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:37.674633   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:37.674641   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:37.674642   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:37.674647   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:37.674727   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.674894   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.674907   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.674979   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:37.674989   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:37.675000   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	W1026 00:44:37.675082   18362 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1026 00:44:37.675302   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.676501   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.676531   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.676681   18362 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1026 00:44:37.676823   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.676689   18362 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1026 00:44:37.676839   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.677070   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.677248   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.677267   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.677282   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.677366   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.677527   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.677532   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.677693   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.677818   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.677928   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.678261   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.678285   18362 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1026 00:44:37.678299   18362 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1026 00:44:37.678314   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.679192   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.679607   18362 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1026 00:44:37.679685   18362 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1026 00:44:37.680609   18362 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1026 00:44:37.681568   18362 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1026 00:44:37.681586   18362 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1026 00:44:37.681603   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.682164   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.682378   18362 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1026 00:44:37.682477   18362 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1026 00:44:37.682494   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1026 00:44:37.682510   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.682563   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.682580   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.683107   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.683284   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.683467   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.683616   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.683969   18362 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 00:44:37.683988   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1026 00:44:37.684006   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.686149   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37629
	I1026 00:44:37.686760   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.686861   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36827
	I1026 00:44:37.687545   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.687553   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.687661   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.687672   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.688039   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.688041   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.688086   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.688093   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.688239   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.688766   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.688850   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.688863   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.688915   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.689166   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.689188   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.689226   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.689525   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.689700   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.689730   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.689792   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.689810   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.689837   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.689994   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.690140   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.690148   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.690183   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.690259   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.690401   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.690409   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.690525   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.690581   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	W1026 00:44:37.701041   18362 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44234->192.168.39.207:22: read: connection reset by peer
	I1026 00:44:37.701085   18362 retry.go:31] will retry after 317.954236ms: ssh: handshake failed: read tcp 192.168.39.1:44234->192.168.39.207:22: read: connection reset by peer
	W1026 00:44:37.701165   18362 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44246->192.168.39.207:22: read: connection reset by peer
	I1026 00:44:37.701182   18362 retry.go:31] will retry after 242.443302ms: ssh: handshake failed: read tcp 192.168.39.1:44246->192.168.39.207:22: read: connection reset by peer
	I1026 00:44:37.707347   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46615
	I1026 00:44:37.707712   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.708064   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.708080   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.708342   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.708455   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.710059   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.711647   18362 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1026 00:44:37.712855   18362 out.go:177]   - Using image docker.io/busybox:stable
	I1026 00:44:37.714048   18362 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 00:44:37.714066   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1026 00:44:37.714084   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.717234   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.717703   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.717722   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.717884   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.718033   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.718181   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.718263   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	W1026 00:44:37.718839   18362 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44264->192.168.39.207:22: read: connection reset by peer
	I1026 00:44:37.718858   18362 retry.go:31] will retry after 245.014763ms: ssh: handshake failed: read tcp 192.168.39.1:44264->192.168.39.207:22: read: connection reset by peer
	I1026 00:44:37.928032   18362 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 00:44:37.928057   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1026 00:44:38.040676   18362 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1026 00:44:38.040702   18362 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1026 00:44:38.044989   18362 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 00:44:38.045018   18362 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 00:44:38.132483   18362 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1026 00:44:38.132510   18362 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1026 00:44:38.158001   18362 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1026 00:44:38.158030   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1026 00:44:38.215125   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 00:44:38.232449   18362 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 00:44:38.232477   18362 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 00:44:38.233017   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 00:44:38.238582   18362 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 00:44:38.238769   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 00:44:38.250194   18362 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1026 00:44:38.250213   18362 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1026 00:44:38.269943   18362 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1026 00:44:38.269972   18362 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1026 00:44:38.290982   18362 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1026 00:44:38.291014   18362 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1026 00:44:38.297019   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 00:44:38.305877   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 00:44:38.330805   18362 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1026 00:44:38.330825   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1026 00:44:38.353435   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 00:44:38.355323   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 00:44:38.424194   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1026 00:44:38.443435   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 00:44:38.456588   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 00:44:38.471090   18362 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1026 00:44:38.471112   18362 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1026 00:44:38.506967   18362 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1026 00:44:38.506990   18362 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1026 00:44:38.530279   18362 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1026 00:44:38.530311   18362 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1026 00:44:38.535380   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1026 00:44:38.610025   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 00:44:38.741707   18362 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1026 00:44:38.741731   18362 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1026 00:44:38.747341   18362 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1026 00:44:38.747360   18362 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1026 00:44:38.756637   18362 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1026 00:44:38.756657   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1026 00:44:38.863447   18362 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1026 00:44:38.863473   18362 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1026 00:44:38.873943   18362 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1026 00:44:38.873967   18362 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1026 00:44:38.911896   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1026 00:44:39.105510   18362 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 00:44:39.105534   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1026 00:44:39.160191   18362 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1026 00:44:39.160225   18362 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1026 00:44:39.488417   18362 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1026 00:44:39.488441   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1026 00:44:39.523799   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 00:44:39.711790   18362 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1026 00:44:39.711825   18362 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1026 00:44:39.814961   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.599795174s)
	I1026 00:44:39.815007   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:39.815019   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:39.815315   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:39.815334   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:39.815361   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:39.815376   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:39.815384   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:39.815766   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:39.815778   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:39.815783   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:39.925289   18362 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1026 00:44:39.925313   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1026 00:44:40.290359   18362 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1026 00:44:40.290385   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1026 00:44:40.631685   18362 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 00:44:40.631740   18362 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1026 00:44:40.954205   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 00:44:42.716580   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.483528072s)
	I1026 00:44:42.716612   18362 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.478000293s)
	I1026 00:44:42.716635   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:42.716664   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:42.716721   18362 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.477905606s)
	I1026 00:44:42.716756   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.419714819s)
	I1026 00:44:42.716753   18362 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1026 00:44:42.716775   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:42.716784   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:42.716846   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.410946596s)
	I1026 00:44:42.716866   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:42.716874   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:42.717155   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:42.717163   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:42.717178   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:42.717183   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:42.717192   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:42.717200   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:42.717207   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:42.717214   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:42.717221   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:42.717228   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:42.717272   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:42.717308   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:42.717317   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:42.717330   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:42.717336   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:42.717449   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:42.717474   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:42.717484   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:42.717500   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:42.717507   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:42.717598   18362 node_ready.go:35] waiting up to 6m0s for node "addons-602145" to be "Ready" ...
	I1026 00:44:42.717686   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:42.717716   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:42.717726   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:42.752808   18362 node_ready.go:49] node "addons-602145" has status "Ready":"True"
	I1026 00:44:42.752829   18362 node_ready.go:38] duration metric: took 35.207505ms for node "addons-602145" to be "Ready" ...
	I1026 00:44:42.752838   18362 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 00:44:42.823086   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:42.823107   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:42.823345   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:42.823367   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:42.823392   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:42.836076   18362 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-j7hfs" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:43.253149   18362 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-602145" context rescaled to 1 replicas
	I1026 00:44:43.431917   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.078445578s)
	I1026 00:44:43.431970   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.431976   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.076624651s)
	I1026 00:44:43.432021   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.007795594s)
	I1026 00:44:43.432025   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.432108   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.432115   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.988649739s)
	I1026 00:44:43.432138   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.432150   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.431987   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.432056   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.432200   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.432254   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.975635606s)
	I1026 00:44:43.432318   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.432309   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.896905927s)
	I1026 00:44:43.432361   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.432377   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.432338   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.432597   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.432630   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.432645   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.432655   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.432663   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.432670   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.432736   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.432751   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.432761   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.432772   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.432795   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.432825   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.432833   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.432840   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.432913   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.432942   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.432965   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.432971   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.432980   18362 addons.go:475] Verifying addon metrics-server=true in "addons-602145"
	I1026 00:44:43.433017   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.433034   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.433030   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.433044   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.433056   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.433108   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.433119   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.433126   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.433132   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.433174   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.433181   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.433189   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.433195   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.433485   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.433516   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.433527   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.433589   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.433604   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.433613   18362 addons.go:475] Verifying addon registry=true in "addons-602145"
	I1026 00:44:43.434342   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.434374   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.434381   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.434605   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.434638   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.434645   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.435854   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.435894   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.435901   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.436531   18362 out.go:177] * Verifying registry addon...
	I1026 00:44:43.438465   18362 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1026 00:44:43.504446   18362 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 00:44:43.504480   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:43.533385   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.533411   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.533674   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.533695   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.533681   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.949853   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:44.469071   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:44.731892   18362 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1026 00:44:44.731935   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:44.734738   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:44.735129   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:44.735158   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:44.735356   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:44.735538   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:44.735678   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:44.735812   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:44.895478   18362 pod_ready.go:103] pod "amd-gpu-device-plugin-j7hfs" in "kube-system" namespace has status "Ready":"False"
	I1026 00:44:44.975559   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:44.984750   18362 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1026 00:44:45.019449   18362 addons.go:234] Setting addon gcp-auth=true in "addons-602145"
	I1026 00:44:45.019505   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:45.019903   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:45.019950   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:45.034890   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44153
	I1026 00:44:45.035347   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:45.035830   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:45.035850   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:45.036171   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:45.036611   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:45.036664   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:45.051378   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I1026 00:44:45.051875   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:45.052365   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:45.052398   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:45.052786   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:45.053001   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:45.054512   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:45.054755   18362 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1026 00:44:45.054783   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:45.057144   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:45.057472   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:45.057500   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:45.057639   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:45.057807   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:45.057966   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:45.058136   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:45.446069   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:45.693687   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.083625407s)
	I1026 00:44:45.693731   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.781796676s)
	I1026 00:44:45.693766   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:45.693784   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:45.693738   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:45.693837   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:45.693844   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.170009353s)
	W1026 00:44:45.693885   18362 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 00:44:45.693908   18362 retry.go:31] will retry after 342.657784ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 00:44:45.694030   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:45.694047   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:45.694056   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:45.694070   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:45.694237   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:45.694243   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:45.694262   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:45.694271   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:45.694282   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:45.694326   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:45.694350   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:45.695532   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:45.695549   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:45.695560   18362 addons.go:475] Verifying addon ingress=true in "addons-602145"
	I1026 00:44:45.695758   18362 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-602145 service yakd-dashboard -n yakd-dashboard
	
	I1026 00:44:45.696906   18362 out.go:177] * Verifying ingress addon...
	I1026 00:44:45.699303   18362 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1026 00:44:45.724699   18362 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 00:44:45.724724   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:45.942829   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:46.037746   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 00:44:46.204124   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:46.463536   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:46.723036   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:46.735975   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.781710044s)
	I1026 00:44:46.736024   18362 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.681245868s)
	I1026 00:44:46.736027   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:46.736181   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:46.736527   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:46.736549   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:46.736557   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:46.736564   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:46.736571   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:46.736774   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:46.736806   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:46.736819   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:46.736829   18362 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-602145"
	I1026 00:44:46.737666   18362 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1026 00:44:46.738662   18362 out.go:177] * Verifying csi-hostpath-driver addon...
	I1026 00:44:46.740355   18362 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1026 00:44:46.741014   18362 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1026 00:44:46.741807   18362 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1026 00:44:46.741823   18362 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1026 00:44:46.753149   18362 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 00:44:46.753169   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:46.882573   18362 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1026 00:44:46.882599   18362 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1026 00:44:46.943810   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:46.965144   18362 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 00:44:46.965163   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1026 00:44:47.047486   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 00:44:47.205884   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:47.549679   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:47.550319   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:47.557275   18362 pod_ready.go:103] pod "amd-gpu-device-plugin-j7hfs" in "kube-system" namespace has status "Ready":"False"
	I1026 00:44:47.703212   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:47.805956   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:47.957870   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:48.204067   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:48.245098   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:48.361982   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.324185805s)
	I1026 00:44:48.362033   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:48.362088   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:48.362098   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.314567887s)
	I1026 00:44:48.362138   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:48.362155   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:48.362360   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:48.362375   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:48.362383   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:48.362391   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:48.362495   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:48.362507   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:48.362551   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:48.362564   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:48.362572   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:48.362592   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:48.362606   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:48.364153   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:48.364180   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:48.365500   18362 addons.go:475] Verifying addon gcp-auth=true in "addons-602145"
	I1026 00:44:48.367541   18362 out.go:177] * Verifying gcp-auth addon...
	I1026 00:44:48.369997   18362 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1026 00:44:48.372722   18362 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1026 00:44:48.372736   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:48.441724   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:48.703279   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:48.744887   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:48.873772   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:48.945509   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:49.205267   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:49.246195   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:49.375470   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:49.443789   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:49.703625   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:49.747021   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:49.841644   18362 pod_ready.go:103] pod "amd-gpu-device-plugin-j7hfs" in "kube-system" namespace has status "Ready":"False"
	I1026 00:44:49.873239   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:49.942516   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:50.203094   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:50.245485   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:50.373329   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:50.442392   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:50.704477   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:50.746397   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:50.873647   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:50.943332   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:51.205843   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:51.362986   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:51.519995   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:51.520530   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:51.705334   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:51.745974   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:51.842552   18362 pod_ready.go:103] pod "amd-gpu-device-plugin-j7hfs" in "kube-system" namespace has status "Ready":"False"
	I1026 00:44:51.873122   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:51.942249   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:52.203955   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:52.246482   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:52.373399   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:52.442289   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:52.703594   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:52.746304   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:52.873837   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:52.942797   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:53.205474   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:53.306800   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:53.405814   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:53.443231   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:53.704811   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:53.746870   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:53.842727   18362 pod_ready.go:103] pod "amd-gpu-device-plugin-j7hfs" in "kube-system" namespace has status "Ready":"False"
	I1026 00:44:53.873246   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:53.942900   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:54.203283   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:54.245640   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:54.373845   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:54.441683   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:54.702911   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:54.745894   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:54.874021   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:54.944100   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:55.205119   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:55.244943   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:55.373657   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:55.442641   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:55.703833   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:55.745846   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:55.877002   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:55.975134   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:56.204207   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:56.246187   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:56.342170   18362 pod_ready.go:93] pod "amd-gpu-device-plugin-j7hfs" in "kube-system" namespace has status "Ready":"True"
	I1026 00:44:56.342196   18362 pod_ready.go:82] duration metric: took 13.506093961s for pod "amd-gpu-device-plugin-j7hfs" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.342207   18362 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-27zzz" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.343926   18362 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-27zzz" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-27zzz" not found
	I1026 00:44:56.343943   18362 pod_ready.go:82] duration metric: took 1.730601ms for pod "coredns-7c65d6cfc9-27zzz" in "kube-system" namespace to be "Ready" ...
	E1026 00:44:56.343951   18362 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-27zzz" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-27zzz" not found
	I1026 00:44:56.343958   18362 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rg759" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.348350   18362 pod_ready.go:93] pod "coredns-7c65d6cfc9-rg759" in "kube-system" namespace has status "Ready":"True"
	I1026 00:44:56.348367   18362 pod_ready.go:82] duration metric: took 4.403788ms for pod "coredns-7c65d6cfc9-rg759" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.348378   18362 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-602145" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.352322   18362 pod_ready.go:93] pod "etcd-addons-602145" in "kube-system" namespace has status "Ready":"True"
	I1026 00:44:56.352339   18362 pod_ready.go:82] duration metric: took 3.953676ms for pod "etcd-addons-602145" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.352346   18362 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-602145" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.356524   18362 pod_ready.go:93] pod "kube-apiserver-addons-602145" in "kube-system" namespace has status "Ready":"True"
	I1026 00:44:56.356544   18362 pod_ready.go:82] duration metric: took 4.190127ms for pod "kube-apiserver-addons-602145" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.356554   18362 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-602145" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.372514   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:56.443587   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:56.540214   18362 pod_ready.go:93] pod "kube-controller-manager-addons-602145" in "kube-system" namespace has status "Ready":"True"
	I1026 00:44:56.540242   18362 pod_ready.go:82] duration metric: took 183.679309ms for pod "kube-controller-manager-addons-602145" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.540256   18362 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zmp9p" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.719402   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:56.744353   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:56.873914   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:56.941651   18362 pod_ready.go:93] pod "kube-proxy-zmp9p" in "kube-system" namespace has status "Ready":"True"
	I1026 00:44:56.941679   18362 pod_ready.go:82] duration metric: took 401.416415ms for pod "kube-proxy-zmp9p" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.941691   18362 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-602145" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.942034   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:57.205326   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:57.245298   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:57.340147   18362 pod_ready.go:93] pod "kube-scheduler-addons-602145" in "kube-system" namespace has status "Ready":"True"
	I1026 00:44:57.340172   18362 pod_ready.go:82] duration metric: took 398.474577ms for pod "kube-scheduler-addons-602145" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:57.340182   18362 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:57.374156   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:57.442078   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:57.703414   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:57.744761   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:57.873438   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:57.943106   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:58.203321   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:58.245332   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:58.373612   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:58.442763   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:58.704102   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:58.745661   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:58.872933   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:58.942251   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:59.203926   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:59.245830   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:59.346756   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:44:59.373514   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:59.442649   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:59.704361   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:59.805106   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:59.872826   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:59.942023   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:00.202986   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:00.245136   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:00.373390   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:00.442345   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:00.708857   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:00.745797   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:00.874326   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:00.942406   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:01.203852   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:01.523881   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:01.524414   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:01.524879   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:01.781726   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:01.783834   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:01.784227   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:01.882559   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:01.943287   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:02.209786   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:02.246413   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:02.373542   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:02.443290   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:02.703820   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:02.745281   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:02.874104   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:02.942494   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:03.206266   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:03.245887   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:03.373668   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:03.443101   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:03.703504   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:03.746877   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:03.846919   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:03.874285   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:03.942660   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:04.203304   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:04.245799   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:04.373094   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:04.442378   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:04.704115   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:04.745698   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:04.873998   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:04.942566   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:05.204699   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:05.244827   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:05.375213   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:05.442746   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:05.706556   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:05.746964   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:05.849014   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:05.874917   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:05.941951   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:06.203121   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:06.244747   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:06.372654   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:06.443598   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:06.703436   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:06.749167   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:06.874367   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:06.942724   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:07.202754   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:07.245540   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:07.374815   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:07.441652   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:07.703513   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:07.745931   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:07.874202   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:07.942630   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:08.204363   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:08.245617   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:08.346158   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:08.374115   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:08.441895   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:08.708945   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:08.745960   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:08.874476   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:08.942871   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:09.204122   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:09.245298   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:09.373372   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:09.443443   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:09.704682   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:09.744928   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:09.874968   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:09.975875   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:10.203299   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:10.245007   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:10.374254   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:10.442084   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:10.703910   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:10.745834   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:10.847059   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:10.873791   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:10.941681   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:11.203677   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:11.517728   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:11.518085   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:11.520655   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:11.703377   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:11.745588   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:11.873978   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:11.942034   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:12.203617   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:12.246445   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:12.375533   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:12.476158   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:12.703613   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:12.745773   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:12.851302   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:12.873251   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:12.942460   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:13.203975   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:13.245184   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:13.373276   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:13.445082   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:13.703701   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:13.744735   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:13.874243   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:13.942071   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:14.203475   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:14.245882   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:14.373800   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:14.441539   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:14.702810   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:14.745864   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:14.873647   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:14.942798   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:15.203935   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:15.245657   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:15.346178   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:15.373082   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:15.442334   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:15.703410   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:15.745174   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:15.874384   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:15.942177   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:16.204126   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:16.245428   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:16.380021   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:16.442424   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:16.704115   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:16.745604   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:16.873445   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:16.942687   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:17.203770   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:17.245539   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:17.346253   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:17.374393   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:17.443133   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:17.704708   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:17.746018   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:17.873991   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:17.941872   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:18.204076   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:18.244938   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:18.374537   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:18.443107   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:18.703385   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:18.745999   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:18.873760   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:18.942473   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:19.204188   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:19.245672   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:19.346495   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:19.373531   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:19.442934   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:19.703943   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:19.745986   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:19.874432   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:19.975215   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:20.204299   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:20.245866   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:20.373534   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:20.442584   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:20.703845   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:20.745768   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:20.873632   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:20.942856   18362 kapi.go:107] duration metric: took 37.504386247s to wait for kubernetes.io/minikube-addons=registry ...
	I1026 00:45:21.203691   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:21.718316   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:21.719992   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:21.721813   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:21.727889   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:21.746266   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:21.876386   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:22.203397   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:22.246100   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:22.374307   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:22.703433   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:22.751135   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:22.874402   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:23.203594   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:23.246197   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:23.373642   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:23.703199   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:23.745499   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:23.845679   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:23.873091   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:24.204350   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:24.245855   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:24.373952   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:24.708382   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:24.746052   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:24.873844   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:25.203312   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:25.245246   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:25.373404   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:25.703887   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:25.745384   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:25.846166   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:25.873728   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:26.204992   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:26.246774   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:26.373153   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:27.034642   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:27.034774   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:27.035772   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:27.208103   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:27.310868   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:27.374097   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:27.705208   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:27.746489   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:27.846304   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:27.873446   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:28.203266   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:28.245142   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:28.373287   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:28.703102   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:28.746115   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:28.882349   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:29.202937   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:29.245150   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:29.373432   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:29.703219   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:29.745355   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:29.874163   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:30.204270   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:30.245812   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:30.346348   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:30.372767   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:30.703687   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:30.758149   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:30.876840   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:31.203626   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:31.246555   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:31.373264   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:31.704643   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:31.744999   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:31.872829   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:32.213626   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:32.245754   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:32.347297   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:32.375821   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:32.703358   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:32.745589   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:32.873685   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:33.685837   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:33.689347   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:33.689544   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:33.788307   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:33.788838   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:33.882954   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:34.203196   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:34.245612   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:34.375063   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:34.705246   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:34.745890   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:34.851682   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:34.879350   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:35.204282   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:35.247070   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:35.373536   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:35.705310   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:35.746615   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:35.874691   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:36.205556   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:36.306953   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:36.373541   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:36.704426   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:36.747970   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:36.873760   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:37.203880   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:37.244788   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:37.347153   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:37.375129   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:37.704319   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:37.746360   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:37.873600   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:38.203914   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:38.304512   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:38.373711   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:38.703310   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:38.745662   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:38.873875   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:39.204326   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:39.305799   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:39.348170   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:39.375024   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:39.704110   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:39.745976   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:39.873391   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:40.203434   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:40.246393   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:40.374693   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:40.704215   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:40.746495   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:40.873587   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:41.203662   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:41.249521   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:41.374419   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:41.704331   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:41.745312   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:41.846428   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:41.873302   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:42.204261   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:42.309990   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:42.405253   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:42.703723   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:42.745313   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:42.873147   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:43.204558   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:43.249371   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:43.373507   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:43.703540   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:43.770832   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:43.847333   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:43.873632   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:44.203399   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:44.251147   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:44.373454   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:44.708453   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:44.747356   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:44.873558   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:45.203629   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:45.245567   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:45.372905   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:45.706111   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:45.752737   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:45.875564   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:46.206010   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:46.246093   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:46.591522   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:46.594740   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:46.704200   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:46.745563   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:46.873700   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:47.207585   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:47.245372   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:47.396791   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:47.705237   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:47.746261   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:47.873591   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:48.203607   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:48.245750   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:48.373572   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:48.703237   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:48.746179   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:48.846385   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:48.872741   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:49.203417   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:49.245638   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:49.374087   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:49.706144   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:49.746583   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:49.874153   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:50.204285   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:50.245595   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:50.373859   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:50.704066   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:50.745445   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:50.847331   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:50.874005   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:51.204304   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:51.304987   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:51.374293   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:51.704508   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:51.745943   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:51.873702   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:52.203338   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:52.245748   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:52.373185   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:52.946854   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:52.946911   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:52.946989   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:52.948333   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:53.212300   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:53.245376   18362 kapi.go:107] duration metric: took 1m6.504358421s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1026 00:45:53.373235   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:53.704173   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:53.873751   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:54.204310   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:54.374257   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:54.704159   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:54.873323   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:55.204425   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:55.345758   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:55.373543   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:55.703801   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:55.873292   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:56.203933   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:56.373571   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:56.703678   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:56.872775   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:57.203929   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:57.351056   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:57.375266   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:57.704608   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:57.874394   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:58.203938   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:58.373398   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:58.703241   18362 kapi.go:107] duration metric: took 1m13.003934885s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1026 00:45:58.873818   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:59.374290   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:59.845661   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:59.873380   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:46:00.373811   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:46:01.220661   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:46:01.373704   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:46:01.846338   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:01.873373   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:46:02.373762   18362 kapi.go:107] duration metric: took 1m14.003763064s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1026 00:46:02.375526   18362 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-602145 cluster.
	I1026 00:46:02.377048   18362 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1026 00:46:02.378493   18362 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1026 00:46:02.379964   18362 out.go:177] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, default-storageclass, metrics-server, inspektor-gadget, ingress-dns, cloud-spanner, storage-provisioner-rancher, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1026 00:46:02.381214   18362 addons.go:510] duration metric: took 1m24.856119786s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin storage-provisioner default-storageclass metrics-server inspektor-gadget ingress-dns cloud-spanner storage-provisioner-rancher yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1026 00:46:03.846374   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:05.846858   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:08.346208   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:10.846564   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:13.345548   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:15.345909   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:17.346689   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:19.346733   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:21.847027   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:23.347691   18362 pod_ready.go:93] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"True"
	I1026 00:46:23.347715   18362 pod_ready.go:82] duration metric: took 1m26.007527804s for pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace to be "Ready" ...
	I1026 00:46:23.347725   18362 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-njbmm" in "kube-system" namespace to be "Ready" ...
	I1026 00:46:23.352273   18362 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-njbmm" in "kube-system" namespace has status "Ready":"True"
	I1026 00:46:23.352295   18362 pod_ready.go:82] duration metric: took 4.562869ms for pod "nvidia-device-plugin-daemonset-njbmm" in "kube-system" namespace to be "Ready" ...
	I1026 00:46:23.352309   18362 pod_ready.go:39] duration metric: took 1m40.599451217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 00:46:23.352326   18362 api_server.go:52] waiting for apiserver process to appear ...
	I1026 00:46:23.352352   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 00:46:23.352399   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 00:46:23.405865   18362 cri.go:89] found id: "89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1"
	I1026 00:46:23.405887   18362 cri.go:89] found id: ""
	I1026 00:46:23.405896   18362 logs.go:282] 1 containers: [89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1]
	I1026 00:46:23.405946   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:23.410011   18362 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 00:46:23.410063   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 00:46:23.451730   18362 cri.go:89] found id: "39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e"
	I1026 00:46:23.451750   18362 cri.go:89] found id: ""
	I1026 00:46:23.451757   18362 logs.go:282] 1 containers: [39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e]
	I1026 00:46:23.451801   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:23.455402   18362 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 00:46:23.455464   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 00:46:23.495737   18362 cri.go:89] found id: "5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d"
	I1026 00:46:23.495768   18362 cri.go:89] found id: ""
	I1026 00:46:23.495778   18362 logs.go:282] 1 containers: [5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d]
	I1026 00:46:23.495836   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:23.502808   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 00:46:23.502875   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 00:46:23.537833   18362 cri.go:89] found id: "6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098"
	I1026 00:46:23.537864   18362 cri.go:89] found id: ""
	I1026 00:46:23.537873   18362 logs.go:282] 1 containers: [6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098]
	I1026 00:46:23.537931   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:23.541944   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 00:46:23.542029   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 00:46:23.579051   18362 cri.go:89] found id: "bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354"
	I1026 00:46:23.579072   18362 cri.go:89] found id: ""
	I1026 00:46:23.579080   18362 logs.go:282] 1 containers: [bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354]
	I1026 00:46:23.579124   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:23.582814   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 00:46:23.582889   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 00:46:23.617879   18362 cri.go:89] found id: "b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f"
	I1026 00:46:23.617906   18362 cri.go:89] found id: ""
	I1026 00:46:23.617914   18362 logs.go:282] 1 containers: [b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f]
	I1026 00:46:23.617958   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:23.622279   18362 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 00:46:23.622349   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 00:46:23.658673   18362 cri.go:89] found id: ""
	I1026 00:46:23.658705   18362 logs.go:282] 0 containers: []
	W1026 00:46:23.658716   18362 logs.go:284] No container was found matching "kindnet"
	I1026 00:46:23.658727   18362 logs.go:123] Gathering logs for CRI-O ...
	I1026 00:46:23.658741   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 00:46:24.761490   18362 logs.go:123] Gathering logs for describe nodes ...
	I1026 00:46:24.761541   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 00:46:24.886303   18362 logs.go:123] Gathering logs for kube-apiserver [89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1] ...
	I1026 00:46:24.886335   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1"
	I1026 00:46:24.934529   18362 logs.go:123] Gathering logs for etcd [39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e] ...
	I1026 00:46:24.934563   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e"
	I1026 00:46:25.000977   18362 logs.go:123] Gathering logs for coredns [5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d] ...
	I1026 00:46:25.001012   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d"
	I1026 00:46:25.038835   18362 logs.go:123] Gathering logs for kube-proxy [bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354] ...
	I1026 00:46:25.038864   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354"
	I1026 00:46:25.074707   18362 logs.go:123] Gathering logs for kube-controller-manager [b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f] ...
	I1026 00:46:25.074732   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f"
	I1026 00:46:25.133369   18362 logs.go:123] Gathering logs for kubelet ...
	I1026 00:46:25.133427   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 00:46:25.215955   18362 logs.go:123] Gathering logs for dmesg ...
	I1026 00:46:25.215988   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 00:46:25.230560   18362 logs.go:123] Gathering logs for kube-scheduler [6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098] ...
	I1026 00:46:25.230597   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098"
	I1026 00:46:25.271642   18362 logs.go:123] Gathering logs for container status ...
	I1026 00:46:25.271669   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 00:46:27.821293   18362 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 00:46:27.841076   18362 api_server.go:72] duration metric: took 1m50.316047956s to wait for apiserver process to appear ...
	I1026 00:46:27.841105   18362 api_server.go:88] waiting for apiserver healthz status ...
	I1026 00:46:27.841135   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 00:46:27.841177   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 00:46:27.879218   18362 cri.go:89] found id: "89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1"
	I1026 00:46:27.879251   18362 cri.go:89] found id: ""
	I1026 00:46:27.879261   18362 logs.go:282] 1 containers: [89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1]
	I1026 00:46:27.879319   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:27.884135   18362 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 00:46:27.884197   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 00:46:27.919716   18362 cri.go:89] found id: "39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e"
	I1026 00:46:27.919740   18362 cri.go:89] found id: ""
	I1026 00:46:27.919747   18362 logs.go:282] 1 containers: [39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e]
	I1026 00:46:27.919792   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:27.923742   18362 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 00:46:27.923805   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 00:46:27.963665   18362 cri.go:89] found id: "5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d"
	I1026 00:46:27.963690   18362 cri.go:89] found id: ""
	I1026 00:46:27.963699   18362 logs.go:282] 1 containers: [5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d]
	I1026 00:46:27.963751   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:27.967426   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 00:46:27.967480   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 00:46:28.004026   18362 cri.go:89] found id: "6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098"
	I1026 00:46:28.004055   18362 cri.go:89] found id: ""
	I1026 00:46:28.004064   18362 logs.go:282] 1 containers: [6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098]
	I1026 00:46:28.004111   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:28.011483   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 00:46:28.011563   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 00:46:28.054008   18362 cri.go:89] found id: "bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354"
	I1026 00:46:28.054027   18362 cri.go:89] found id: ""
	I1026 00:46:28.054036   18362 logs.go:282] 1 containers: [bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354]
	I1026 00:46:28.054089   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:28.058073   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 00:46:28.058117   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 00:46:28.094426   18362 cri.go:89] found id: "b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f"
	I1026 00:46:28.094448   18362 cri.go:89] found id: ""
	I1026 00:46:28.094459   18362 logs.go:282] 1 containers: [b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f]
	I1026 00:46:28.094503   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:28.098143   18362 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 00:46:28.098201   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 00:46:28.142839   18362 cri.go:89] found id: ""
	I1026 00:46:28.142858   18362 logs.go:282] 0 containers: []
	W1026 00:46:28.142865   18362 logs.go:284] No container was found matching "kindnet"
	I1026 00:46:28.142872   18362 logs.go:123] Gathering logs for coredns [5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d] ...
	I1026 00:46:28.142883   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d"
	I1026 00:46:28.178602   18362 logs.go:123] Gathering logs for kube-scheduler [6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098] ...
	I1026 00:46:28.178637   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098"
	I1026 00:46:28.225911   18362 logs.go:123] Gathering logs for dmesg ...
	I1026 00:46:28.225944   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 00:46:28.239815   18362 logs.go:123] Gathering logs for describe nodes ...
	I1026 00:46:28.239842   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 00:46:28.345958   18362 logs.go:123] Gathering logs for kube-apiserver [89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1] ...
	I1026 00:46:28.345987   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1"
	I1026 00:46:28.394436   18362 logs.go:123] Gathering logs for kube-controller-manager [b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f] ...
	I1026 00:46:28.394478   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f"
	I1026 00:46:28.451960   18362 logs.go:123] Gathering logs for CRI-O ...
	I1026 00:46:28.451993   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 00:46:29.467534   18362 logs.go:123] Gathering logs for container status ...
	I1026 00:46:29.467575   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 00:46:29.534713   18362 logs.go:123] Gathering logs for kubelet ...
	I1026 00:46:29.534750   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 00:46:29.621723   18362 logs.go:123] Gathering logs for etcd [39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e] ...
	I1026 00:46:29.621765   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e"
	I1026 00:46:29.687733   18362 logs.go:123] Gathering logs for kube-proxy [bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354] ...
	I1026 00:46:29.687764   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354"
	I1026 00:46:32.227685   18362 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I1026 00:46:32.231993   18362 api_server.go:279] https://192.168.39.207:8443/healthz returned 200:
	ok
	I1026 00:46:32.233049   18362 api_server.go:141] control plane version: v1.31.2
	I1026 00:46:32.233072   18362 api_server.go:131] duration metric: took 4.391960342s to wait for apiserver health ...
	I1026 00:46:32.233079   18362 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 00:46:32.233095   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 00:46:32.233135   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 00:46:32.282289   18362 cri.go:89] found id: "89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1"
	I1026 00:46:32.282312   18362 cri.go:89] found id: ""
	I1026 00:46:32.282319   18362 logs.go:282] 1 containers: [89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1]
	I1026 00:46:32.282362   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:32.296702   18362 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 00:46:32.296786   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 00:46:32.363658   18362 cri.go:89] found id: "39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e"
	I1026 00:46:32.363686   18362 cri.go:89] found id: ""
	I1026 00:46:32.363693   18362 logs.go:282] 1 containers: [39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e]
	I1026 00:46:32.363739   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:32.368536   18362 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 00:46:32.368608   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 00:46:32.417056   18362 cri.go:89] found id: "5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d"
	I1026 00:46:32.417080   18362 cri.go:89] found id: ""
	I1026 00:46:32.417087   18362 logs.go:282] 1 containers: [5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d]
	I1026 00:46:32.417134   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:32.420943   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 00:46:32.421003   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 00:46:32.463944   18362 cri.go:89] found id: "6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098"
	I1026 00:46:32.463970   18362 cri.go:89] found id: ""
	I1026 00:46:32.463978   18362 logs.go:282] 1 containers: [6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098]
	I1026 00:46:32.464022   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:32.468021   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 00:46:32.468085   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 00:46:32.522711   18362 cri.go:89] found id: "bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354"
	I1026 00:46:32.522736   18362 cri.go:89] found id: ""
	I1026 00:46:32.522746   18362 logs.go:282] 1 containers: [bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354]
	I1026 00:46:32.522803   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:32.526962   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 00:46:32.527038   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 00:46:32.563485   18362 cri.go:89] found id: "b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f"
	I1026 00:46:32.563509   18362 cri.go:89] found id: ""
	I1026 00:46:32.563518   18362 logs.go:282] 1 containers: [b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f]
	I1026 00:46:32.563563   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:32.567368   18362 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 00:46:32.567424   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 00:46:32.610034   18362 cri.go:89] found id: ""
	I1026 00:46:32.610059   18362 logs.go:282] 0 containers: []
	W1026 00:46:32.610067   18362 logs.go:284] No container was found matching "kindnet"
	I1026 00:46:32.610075   18362 logs.go:123] Gathering logs for kube-apiserver [89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1] ...
	I1026 00:46:32.610085   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1"
	I1026 00:46:32.664057   18362 logs.go:123] Gathering logs for etcd [39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e] ...
	I1026 00:46:32.664085   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e"
	I1026 00:46:32.740212   18362 logs.go:123] Gathering logs for kube-scheduler [6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098] ...
	I1026 00:46:32.740246   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098"
	I1026 00:46:32.788784   18362 logs.go:123] Gathering logs for container status ...
	I1026 00:46:32.788816   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 00:46:32.841082   18362 logs.go:123] Gathering logs for kube-controller-manager [b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f] ...
	I1026 00:46:32.841112   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f"
	I1026 00:46:32.899056   18362 logs.go:123] Gathering logs for CRI-O ...
	I1026 00:46:32.899092   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 00:46:33.764981   18362 logs.go:123] Gathering logs for kubelet ...
	I1026 00:46:33.765029   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 00:46:33.850852   18362 logs.go:123] Gathering logs for dmesg ...
	I1026 00:46:33.850893   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 00:46:33.865960   18362 logs.go:123] Gathering logs for describe nodes ...
	I1026 00:46:33.865989   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 00:46:34.001743   18362 logs.go:123] Gathering logs for coredns [5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d] ...
	I1026 00:46:34.001771   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d"
	I1026 00:46:34.062545   18362 logs.go:123] Gathering logs for kube-proxy [bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354] ...
	I1026 00:46:34.062582   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354"
	I1026 00:46:36.605290   18362 system_pods.go:59] 18 kube-system pods found
	I1026 00:46:36.605322   18362 system_pods.go:61] "amd-gpu-device-plugin-j7hfs" [998a3db9-77d1-44e5-8056-30bfb299237f] Running
	I1026 00:46:36.605328   18362 system_pods.go:61] "coredns-7c65d6cfc9-rg759" [0fc72168-a4b5-4ffb-a60a-879932edb065] Running
	I1026 00:46:36.605332   18362 system_pods.go:61] "csi-hostpath-attacher-0" [1b8843c4-1c3a-4b46-a2c7-e623be1a6fd0] Running
	I1026 00:46:36.605335   18362 system_pods.go:61] "csi-hostpath-resizer-0" [e305542d-5cae-4b7b-b8eb-8746838c449a] Running
	I1026 00:46:36.605338   18362 system_pods.go:61] "csi-hostpathplugin-klclf" [7c681fc4-5331-4a8c-8836-434972b7501f] Running
	I1026 00:46:36.605341   18362 system_pods.go:61] "etcd-addons-602145" [f01141d1-f024-4f45-b88e-316ef438b6db] Running
	I1026 00:46:36.605344   18362 system_pods.go:61] "kube-apiserver-addons-602145" [1a03095d-dcd7-46b6-bd82-2d57dccd04f4] Running
	I1026 00:46:36.605347   18362 system_pods.go:61] "kube-controller-manager-addons-602145" [3da3edd9-5929-4557-98b6-a308808e4f0e] Running
	I1026 00:46:36.605350   18362 system_pods.go:61] "kube-ingress-dns-minikube" [025a59e5-d16f-4e88-b27a-df9b744f402c] Running
	I1026 00:46:36.605354   18362 system_pods.go:61] "kube-proxy-zmp9p" [a8ec7e5b-66ba-4d78-9fb6-7391387d3926] Running
	I1026 00:46:36.605357   18362 system_pods.go:61] "kube-scheduler-addons-602145" [b97d691f-c7d5-46af-9e01-cce925d7b07a] Running
	I1026 00:46:36.605360   18362 system_pods.go:61] "metrics-server-84c5f94fbc-h4pf5" [d14866cc-8862-49b0-991e-5bebca6ba0c0] Running
	I1026 00:46:36.605363   18362 system_pods.go:61] "nvidia-device-plugin-daemonset-njbmm" [d10ea740-696c-405e-abda-87f78aad39bb] Running
	I1026 00:46:36.605366   18362 system_pods.go:61] "registry-66c9cd494c-pgk2s" [7960692c-0aab-43a0-89c7-aca8e7b3647f] Running
	I1026 00:46:36.605368   18362 system_pods.go:61] "registry-proxy-l5dxz" [d343ebc6-cfcc-44d1-974f-3bb153afc92e] Running
	I1026 00:46:36.605371   18362 system_pods.go:61] "snapshot-controller-56fcc65765-jg7jh" [88ad95c2-df86-4bf5-b748-a0356c7d9668] Running
	I1026 00:46:36.605375   18362 system_pods.go:61] "snapshot-controller-56fcc65765-m4s9s" [29e55a42-07fd-48a7-bef4-fbe602d75ff1] Running
	I1026 00:46:36.605378   18362 system_pods.go:61] "storage-provisioner" [7d49ab38-56fb-43aa-a6b9-153edaf888b2] Running
	I1026 00:46:36.605386   18362 system_pods.go:74] duration metric: took 4.372301823s to wait for pod list to return data ...
	I1026 00:46:36.605395   18362 default_sa.go:34] waiting for default service account to be created ...
	I1026 00:46:36.607661   18362 default_sa.go:45] found service account: "default"
	I1026 00:46:36.607681   18362 default_sa.go:55] duration metric: took 2.281632ms for default service account to be created ...
	I1026 00:46:36.607688   18362 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 00:46:36.614138   18362 system_pods.go:86] 18 kube-system pods found
	I1026 00:46:36.614162   18362 system_pods.go:89] "amd-gpu-device-plugin-j7hfs" [998a3db9-77d1-44e5-8056-30bfb299237f] Running
	I1026 00:46:36.614168   18362 system_pods.go:89] "coredns-7c65d6cfc9-rg759" [0fc72168-a4b5-4ffb-a60a-879932edb065] Running
	I1026 00:46:36.614173   18362 system_pods.go:89] "csi-hostpath-attacher-0" [1b8843c4-1c3a-4b46-a2c7-e623be1a6fd0] Running
	I1026 00:46:36.614176   18362 system_pods.go:89] "csi-hostpath-resizer-0" [e305542d-5cae-4b7b-b8eb-8746838c449a] Running
	I1026 00:46:36.614180   18362 system_pods.go:89] "csi-hostpathplugin-klclf" [7c681fc4-5331-4a8c-8836-434972b7501f] Running
	I1026 00:46:36.614185   18362 system_pods.go:89] "etcd-addons-602145" [f01141d1-f024-4f45-b88e-316ef438b6db] Running
	I1026 00:46:36.614188   18362 system_pods.go:89] "kube-apiserver-addons-602145" [1a03095d-dcd7-46b6-bd82-2d57dccd04f4] Running
	I1026 00:46:36.614194   18362 system_pods.go:89] "kube-controller-manager-addons-602145" [3da3edd9-5929-4557-98b6-a308808e4f0e] Running
	I1026 00:46:36.614201   18362 system_pods.go:89] "kube-ingress-dns-minikube" [025a59e5-d16f-4e88-b27a-df9b744f402c] Running
	I1026 00:46:36.614205   18362 system_pods.go:89] "kube-proxy-zmp9p" [a8ec7e5b-66ba-4d78-9fb6-7391387d3926] Running
	I1026 00:46:36.614211   18362 system_pods.go:89] "kube-scheduler-addons-602145" [b97d691f-c7d5-46af-9e01-cce925d7b07a] Running
	I1026 00:46:36.614214   18362 system_pods.go:89] "metrics-server-84c5f94fbc-h4pf5" [d14866cc-8862-49b0-991e-5bebca6ba0c0] Running
	I1026 00:46:36.614220   18362 system_pods.go:89] "nvidia-device-plugin-daemonset-njbmm" [d10ea740-696c-405e-abda-87f78aad39bb] Running
	I1026 00:46:36.614224   18362 system_pods.go:89] "registry-66c9cd494c-pgk2s" [7960692c-0aab-43a0-89c7-aca8e7b3647f] Running
	I1026 00:46:36.614229   18362 system_pods.go:89] "registry-proxy-l5dxz" [d343ebc6-cfcc-44d1-974f-3bb153afc92e] Running
	I1026 00:46:36.614232   18362 system_pods.go:89] "snapshot-controller-56fcc65765-jg7jh" [88ad95c2-df86-4bf5-b748-a0356c7d9668] Running
	I1026 00:46:36.614236   18362 system_pods.go:89] "snapshot-controller-56fcc65765-m4s9s" [29e55a42-07fd-48a7-bef4-fbe602d75ff1] Running
	I1026 00:46:36.614239   18362 system_pods.go:89] "storage-provisioner" [7d49ab38-56fb-43aa-a6b9-153edaf888b2] Running
	I1026 00:46:36.614247   18362 system_pods.go:126] duration metric: took 6.546085ms to wait for k8s-apps to be running ...
	I1026 00:46:36.614254   18362 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 00:46:36.614296   18362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 00:46:36.629828   18362 system_svc.go:56] duration metric: took 15.565045ms WaitForService to wait for kubelet
	I1026 00:46:36.629857   18362 kubeadm.go:582] duration metric: took 1m59.104837393s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 00:46:36.629880   18362 node_conditions.go:102] verifying NodePressure condition ...
	I1026 00:46:36.633214   18362 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 00:46:36.633238   18362 node_conditions.go:123] node cpu capacity is 2
	I1026 00:46:36.633250   18362 node_conditions.go:105] duration metric: took 3.365385ms to run NodePressure ...
	I1026 00:46:36.633258   18362 start.go:241] waiting for startup goroutines ...
	I1026 00:46:36.633265   18362 start.go:246] waiting for cluster config update ...
	I1026 00:46:36.633280   18362 start.go:255] writing updated cluster config ...
	I1026 00:46:36.633555   18362 ssh_runner.go:195] Run: rm -f paused
	I1026 00:46:36.681760   18362 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1026 00:46:36.683796   18362 out.go:177] * Done! kubectl is now configured to use "addons-602145" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.286266932Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903778286239506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af1cc153-f070-42b5-bd70-342fddda0985 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.286686296Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb07509d-975e-4440-9dcc-9f07ce2c4761 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.286756686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb07509d-975e-4440-9dcc-9f07ce2c4761 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.287061997Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2dbf9cb98ca3ea9f6c504e70dd4022bc4bee4741abf5fd90fbb78325cbf34b5b,PodSandboxId:eeec9e8541b63b4d23e6ac3314f2d8cc441d0d470527ccd5c1f577cac4a8a308,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1729903638287727230,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e5facde9-7465-4490-b87c-c7f93997b01b,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37fe19245e37ffd0a8139c0ea66e38950788c6b0316d376cf29ea59c859d42bd,PodSandboxId:5339180d9cbb6e020fde7605c5c0a3e81f4542f7837b8d86d05017302ed58e1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729903600716126745,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0906784a-c8dd-47c4-a4ba-aab93d9d7b86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85f95e5d7379050e92fd74abef606f79ece2ba70e8460f194a70e0cedbbb5ca0,PodSandboxId:17d85262102d789d71aa985839bf2ba2ec8a5407d6c70323d1e0945eac960c29,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1729903557300263145,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-5pbh4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c4f975d-c2ef-47a5-b364-a565288100a2,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:3725f03320a67e460617bc066007c1bb55dc1925cf459d6f497b5257c0df8c2e,PodSandboxId:20825a5abc25ac68c71efc046db976305bb2c56751333af452b38e21559523e2,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1729903539060093537,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9hm7l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2869a676-d371-4aad-981f-b857fd3eba07,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e3ba5702439620efd6166001f897d6418951cd90c6926e3b747facc8b074d8,PodSandboxId:10c06c57266d1ae2de479aa89a61f3c52f64114010998f28fb9f1813a1c327af,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1729903538118225932,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2rtmc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 34255f88-550d-4860-b77a-e91885903153,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ea5d941e5422da3f30280e3e8d3a1ea37c2c46b2eb2df4bcc43f94b7cfc29f,PodSandboxId:3214a327c5408dfeeb1b54d623f1321496ff11d27631ba94cd1d0849e8fb798e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1729903521813970327,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h4pf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d14866cc-8862-49b0-991e-5bebca6ba0c0,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88738203db74769180ea388511cc83ea799ab512c65750a04d164ec42a394738,PodSandboxId:8fcf2dfac27d8063a5eef0219659c5f86269ffe47e28f2a8d714f14e76b883b9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1729903495801367284,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-j7hfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 998a3db9-77d1-44e5-8056-30bfb299237f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231c015381dc6c22c819e49bab1b6fed73335db6e3daac9bf5e3144d4db5c550,PodSandboxId:5b4eed433f4cf19fa33390ea17e4be2596d0211c1de95310110378f6998920e4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1729903493153034781,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 025a59e5-d16f-4e88-b27a-df9b744f402c,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a985b19d6ed2ebeca4d33799da388cff6c896a67b1792cfb837d44bd1cbdd34e,PodSandboxId:038c192f80c6a1a26
e113d6896fc62d12aa3398726e1071e73135f4aa9471227,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729903484029444937,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d49ab38-56fb-43aa-a6b9-153edaf888b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d,PodSandboxId:2c7291542e5763588d0838ddee45e
fa5847eff50b53890dc2bc0a39182d11afd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729903480568723106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rg759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc72168-a4b5-4ffb-a60a-879932edb065,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354,PodSandboxId:346a91f8335e04a118f37fcd80f48f0e43166fa71d24c391099a347f711565ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729903478437440682,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zmp9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ec7e5b-66ba-4d78-9fb6-7391387d3926,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098,PodSandboxId:d6cf525d5366585c1035033b5be477ed5a1574c54d7787c040bfb2fb9824d25d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729903466148378590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2421bc00409115f53b62f720e9994707,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e,PodSandboxId:49edc5bc1a50f91ef0fcf42c36725f9e8a7c8400aba0d0e291305bee5eab9f89,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729903466170768586,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5709ea146931fa039496c86db864a8e0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89dbbaf2f
83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1,PodSandboxId:132167104a88683b472e1ce3d2e1b7ca082b9a16a683884768592e4ef267cf0e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729903466114586012,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3451cd31f76f1d65566f2bc7d1ef70fa,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45da4da24d6a154128b3fca10
088e97cdd19dc172aadd8937085d5060a08d7f,PodSandboxId:38ee77ed691d7f843a114ec6230aa3d8ed0eb6238714187dc0c911a51e43f2b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729903466104910269,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb40526d0e1222059735de592c242b33,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=fb07509d-975e-4440-9dcc-9f07ce2c4761 name=/runtime.v1.RuntimeService/ListContainers
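	The CRI-O entries above are the server side of repeated CRI gRPC exchanges: Version, ImageFsInfo, and ListContainers with an empty filter (hence "No filters were applied, returning full container list"). A short Go sketch of issuing the same ListContainers call against CRI-O, assuming the default /var/run/crio/crio.sock endpoint and the k8s.io/cri-api bindings (illustrative only, not part of the test harness):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Default CRI-O socket; adjust if the runtime endpoint is configured differently.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI-O: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		// An empty ListContainersRequest matches the logged call: no filter,
		// so the runtime returns every container it knows about.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-25s  %s\n", c.Id[:12], c.Metadata.Name, c.State)
		}
	}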
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.322565346Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d34a6a7b-5d80-4229-a03e-e9851e3e0130 name=/runtime.v1.RuntimeService/Version
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.322640267Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d34a6a7b-5d80-4229-a03e-e9851e3e0130 name=/runtime.v1.RuntimeService/Version
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.323706270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=292114e9-ad4c-41b6-8559-d2297e769b42 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.325098527Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903778325071718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=292114e9-ad4c-41b6-8559-d2297e769b42 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.325789319Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e5cec0d-b321-445f-ac2a-ebc4ef988154 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.325852770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e5cec0d-b321-445f-ac2a-ebc4ef988154 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.326207503Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2dbf9cb98ca3ea9f6c504e70dd4022bc4bee4741abf5fd90fbb78325cbf34b5b,PodSandboxId:eeec9e8541b63b4d23e6ac3314f2d8cc441d0d470527ccd5c1f577cac4a8a308,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1729903638287727230,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e5facde9-7465-4490-b87c-c7f93997b01b,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37fe19245e37ffd0a8139c0ea66e38950788c6b0316d376cf29ea59c859d42bd,PodSandboxId:5339180d9cbb6e020fde7605c5c0a3e81f4542f7837b8d86d05017302ed58e1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729903600716126745,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0906784a-c8dd-47c4-a4ba-aab93d9d7b86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85f95e5d7379050e92fd74abef606f79ece2ba70e8460f194a70e0cedbbb5ca0,PodSandboxId:17d85262102d789d71aa985839bf2ba2ec8a5407d6c70323d1e0945eac960c29,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1729903557300263145,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-5pbh4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c4f975d-c2ef-47a5-b364-a565288100a2,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:3725f03320a67e460617bc066007c1bb55dc1925cf459d6f497b5257c0df8c2e,PodSandboxId:20825a5abc25ac68c71efc046db976305bb2c56751333af452b38e21559523e2,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1729903539060093537,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9hm7l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2869a676-d371-4aad-981f-b857fd3eba07,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e3ba5702439620efd6166001f897d6418951cd90c6926e3b747facc8b074d8,PodSandboxId:10c06c57266d1ae2de479aa89a61f3c52f64114010998f28fb9f1813a1c327af,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1729903538118225932,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2rtmc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 34255f88-550d-4860-b77a-e91885903153,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ea5d941e5422da3f30280e3e8d3a1ea37c2c46b2eb2df4bcc43f94b7cfc29f,PodSandboxId:3214a327c5408dfeeb1b54d623f1321496ff11d27631ba94cd1d0849e8fb798e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1729903521813970327,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h4pf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d14866cc-8862-49b0-991e-5bebca6ba0c0,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88738203db74769180ea388511cc83ea799ab512c65750a04d164ec42a394738,PodSandboxId:8fcf2dfac27d8063a5eef0219659c5f86269ffe47e28f2a8d714f14e76b883b9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1729903495801367284,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-j7hfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 998a3db9-77d1-44e5-8056-30bfb299237f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231c015381dc6c22c819e49bab1b6fed73335db6e3daac9bf5e3144d4db5c550,PodSandboxId:5b4eed433f4cf19fa33390ea17e4be2596d0211c1de95310110378f6998920e4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1729903493153034781,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 025a59e5-d16f-4e88-b27a-df9b744f402c,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a985b19d6ed2ebeca4d33799da388cff6c896a67b1792cfb837d44bd1cbdd34e,PodSandboxId:038c192f80c6a1a26
e113d6896fc62d12aa3398726e1071e73135f4aa9471227,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729903484029444937,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d49ab38-56fb-43aa-a6b9-153edaf888b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d,PodSandboxId:2c7291542e5763588d0838ddee45e
fa5847eff50b53890dc2bc0a39182d11afd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729903480568723106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rg759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc72168-a4b5-4ffb-a60a-879932edb065,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354,PodSandboxId:346a91f8335e04a118f37fcd80f48f0e43166fa71d24c391099a347f711565ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729903478437440682,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zmp9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ec7e5b-66ba-4d78-9fb6-7391387d3926,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098,PodSandboxId:d6cf525d5366585c1035033b5be477ed5a1574c54d7787c040bfb2fb9824d25d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729903466148378590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2421bc00409115f53b62f720e9994707,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e,PodSandboxId:49edc5bc1a50f91ef0fcf42c36725f9e8a7c8400aba0d0e291305bee5eab9f89,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729903466170768586,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5709ea146931fa039496c86db864a8e0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89dbbaf2f
83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1,PodSandboxId:132167104a88683b472e1ce3d2e1b7ca082b9a16a683884768592e4ef267cf0e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729903466114586012,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3451cd31f76f1d65566f2bc7d1ef70fa,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45da4da24d6a154128b3fca10
088e97cdd19dc172aadd8937085d5060a08d7f,PodSandboxId:38ee77ed691d7f843a114ec6230aa3d8ed0eb6238714187dc0c911a51e43f2b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729903466104910269,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb40526d0e1222059735de592c242b33,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=8e5cec0d-b321-445f-ac2a-ebc4ef988154 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.361306967Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82eb874c-30d4-43fc-a43f-40bef27adba0 name=/runtime.v1.RuntimeService/Version
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.361390450Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82eb874c-30d4-43fc-a43f-40bef27adba0 name=/runtime.v1.RuntimeService/Version
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.362139059Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0c3d856-f29e-41b9-8e7c-ed6563258f10 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.363343212Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903778363317766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0c3d856-f29e-41b9-8e7c-ed6563258f10 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.363834515Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ab5be6d-e4f4-4496-a850-70d64d1e2ba3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.363887201Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ab5be6d-e4f4-4496-a850-70d64d1e2ba3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.364244247Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2dbf9cb98ca3ea9f6c504e70dd4022bc4bee4741abf5fd90fbb78325cbf34b5b,PodSandboxId:eeec9e8541b63b4d23e6ac3314f2d8cc441d0d470527ccd5c1f577cac4a8a308,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1729903638287727230,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e5facde9-7465-4490-b87c-c7f93997b01b,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37fe19245e37ffd0a8139c0ea66e38950788c6b0316d376cf29ea59c859d42bd,PodSandboxId:5339180d9cbb6e020fde7605c5c0a3e81f4542f7837b8d86d05017302ed58e1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729903600716126745,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0906784a-c8dd-47c4-a4ba-aab93d9d7b86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85f95e5d7379050e92fd74abef606f79ece2ba70e8460f194a70e0cedbbb5ca0,PodSandboxId:17d85262102d789d71aa985839bf2ba2ec8a5407d6c70323d1e0945eac960c29,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1729903557300263145,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-5pbh4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c4f975d-c2ef-47a5-b364-a565288100a2,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:3725f03320a67e460617bc066007c1bb55dc1925cf459d6f497b5257c0df8c2e,PodSandboxId:20825a5abc25ac68c71efc046db976305bb2c56751333af452b38e21559523e2,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1729903539060093537,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9hm7l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2869a676-d371-4aad-981f-b857fd3eba07,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e3ba5702439620efd6166001f897d6418951cd90c6926e3b747facc8b074d8,PodSandboxId:10c06c57266d1ae2de479aa89a61f3c52f64114010998f28fb9f1813a1c327af,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1729903538118225932,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2rtmc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 34255f88-550d-4860-b77a-e91885903153,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ea5d941e5422da3f30280e3e8d3a1ea37c2c46b2eb2df4bcc43f94b7cfc29f,PodSandboxId:3214a327c5408dfeeb1b54d623f1321496ff11d27631ba94cd1d0849e8fb798e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1729903521813970327,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h4pf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d14866cc-8862-49b0-991e-5bebca6ba0c0,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88738203db74769180ea388511cc83ea799ab512c65750a04d164ec42a394738,PodSandboxId:8fcf2dfac27d8063a5eef0219659c5f86269ffe47e28f2a8d714f14e76b883b9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1729903495801367284,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-j7hfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 998a3db9-77d1-44e5-8056-30bfb299237f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231c015381dc6c22c819e49bab1b6fed73335db6e3daac9bf5e3144d4db5c550,PodSandboxId:5b4eed433f4cf19fa33390ea17e4be2596d0211c1de95310110378f6998920e4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1729903493153034781,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 025a59e5-d16f-4e88-b27a-df9b744f402c,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a985b19d6ed2ebeca4d33799da388cff6c896a67b1792cfb837d44bd1cbdd34e,PodSandboxId:038c192f80c6a1a26
e113d6896fc62d12aa3398726e1071e73135f4aa9471227,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729903484029444937,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d49ab38-56fb-43aa-a6b9-153edaf888b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d,PodSandboxId:2c7291542e5763588d0838ddee45e
fa5847eff50b53890dc2bc0a39182d11afd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729903480568723106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rg759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc72168-a4b5-4ffb-a60a-879932edb065,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354,PodSandboxId:346a91f8335e04a118f37fcd80f48f0e43166fa71d24c391099a347f711565ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729903478437440682,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zmp9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ec7e5b-66ba-4d78-9fb6-7391387d3926,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098,PodSandboxId:d6cf525d5366585c1035033b5be477ed5a1574c54d7787c040bfb2fb9824d25d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729903466148378590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2421bc00409115f53b62f720e9994707,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e,PodSandboxId:49edc5bc1a50f91ef0fcf42c36725f9e8a7c8400aba0d0e291305bee5eab9f89,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729903466170768586,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5709ea146931fa039496c86db864a8e0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89dbbaf2f
83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1,PodSandboxId:132167104a88683b472e1ce3d2e1b7ca082b9a16a683884768592e4ef267cf0e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729903466114586012,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3451cd31f76f1d65566f2bc7d1ef70fa,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45da4da24d6a154128b3fca10
088e97cdd19dc172aadd8937085d5060a08d7f,PodSandboxId:38ee77ed691d7f843a114ec6230aa3d8ed0eb6238714187dc0c911a51e43f2b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729903466104910269,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb40526d0e1222059735de592c242b33,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=7ab5be6d-e4f4-4496-a850-70d64d1e2ba3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.395718888Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9010efa-8bd9-4296-8063-b21f8c8bcfb3 name=/runtime.v1.RuntimeService/Version
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.395802647Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9010efa-8bd9-4296-8063-b21f8c8bcfb3 name=/runtime.v1.RuntimeService/Version
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.396940308Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1bcb33ed-c57d-4815-a60c-82bab54393f5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.398078599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903778398050560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1bcb33ed-c57d-4815-a60c-82bab54393f5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.398723447Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e74033c-8d5b-47b6-8e6d-f5c152a1aa84 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.398781624Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e74033c-8d5b-47b6-8e6d-f5c152a1aa84 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:49:38 addons-602145 crio[665]: time="2024-10-26 00:49:38.399072791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2dbf9cb98ca3ea9f6c504e70dd4022bc4bee4741abf5fd90fbb78325cbf34b5b,PodSandboxId:eeec9e8541b63b4d23e6ac3314f2d8cc441d0d470527ccd5c1f577cac4a8a308,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1729903638287727230,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e5facde9-7465-4490-b87c-c7f93997b01b,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37fe19245e37ffd0a8139c0ea66e38950788c6b0316d376cf29ea59c859d42bd,PodSandboxId:5339180d9cbb6e020fde7605c5c0a3e81f4542f7837b8d86d05017302ed58e1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729903600716126745,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0906784a-c8dd-47c4-a4ba-aab93d9d7b86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85f95e5d7379050e92fd74abef606f79ece2ba70e8460f194a70e0cedbbb5ca0,PodSandboxId:17d85262102d789d71aa985839bf2ba2ec8a5407d6c70323d1e0945eac960c29,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1729903557300263145,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-5pbh4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c4f975d-c2ef-47a5-b364-a565288100a2,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:3725f03320a67e460617bc066007c1bb55dc1925cf459d6f497b5257c0df8c2e,PodSandboxId:20825a5abc25ac68c71efc046db976305bb2c56751333af452b38e21559523e2,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1729903539060093537,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9hm7l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2869a676-d371-4aad-981f-b857fd3eba07,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e3ba5702439620efd6166001f897d6418951cd90c6926e3b747facc8b074d8,PodSandboxId:10c06c57266d1ae2de479aa89a61f3c52f64114010998f28fb9f1813a1c327af,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1729903538118225932,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2rtmc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 34255f88-550d-4860-b77a-e91885903153,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ea5d941e5422da3f30280e3e8d3a1ea37c2c46b2eb2df4bcc43f94b7cfc29f,PodSandboxId:3214a327c5408dfeeb1b54d623f1321496ff11d27631ba94cd1d0849e8fb798e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1729903521813970327,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h4pf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d14866cc-8862-49b0-991e-5bebca6ba0c0,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88738203db74769180ea388511cc83ea799ab512c65750a04d164ec42a394738,PodSandboxId:8fcf2dfac27d8063a5eef0219659c5f86269ffe47e28f2a8d714f14e76b883b9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1729903495801367284,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-j7hfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 998a3db9-77d1-44e5-8056-30bfb299237f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231c015381dc6c22c819e49bab1b6fed73335db6e3daac9bf5e3144d4db5c550,PodSandboxId:5b4eed433f4cf19fa33390ea17e4be2596d0211c1de95310110378f6998920e4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1729903493153034781,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 025a59e5-d16f-4e88-b27a-df9b744f402c,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a985b19d6ed2ebeca4d33799da388cff6c896a67b1792cfb837d44bd1cbdd34e,PodSandboxId:038c192f80c6a1a26
e113d6896fc62d12aa3398726e1071e73135f4aa9471227,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729903484029444937,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d49ab38-56fb-43aa-a6b9-153edaf888b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d,PodSandboxId:2c7291542e5763588d0838ddee45e
fa5847eff50b53890dc2bc0a39182d11afd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729903480568723106,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rg759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc72168-a4b5-4ffb-a60a-879932edb065,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354,PodSandboxId:346a91f8335e04a118f37fcd80f48f0e43166fa71d24c391099a347f711565ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729903478437440682,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zmp9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ec7e5b-66ba-4d78-9fb6-7391387d3926,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098,PodSandboxId:d6cf525d5366585c1035033b5be477ed5a1574c54d7787c040bfb2fb9824d25d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729903466148378590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2421bc00409115f53b62f720e9994707,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e,PodSandboxId:49edc5bc1a50f91ef0fcf42c36725f9e8a7c8400aba0d0e291305bee5eab9f89,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729903466170768586,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5709ea146931fa039496c86db864a8e0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89dbbaf2f
83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1,PodSandboxId:132167104a88683b472e1ce3d2e1b7ca082b9a16a683884768592e4ef267cf0e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729903466114586012,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3451cd31f76f1d65566f2bc7d1ef70fa,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45da4da24d6a154128b3fca10
088e97cdd19dc172aadd8937085d5060a08d7f,PodSandboxId:38ee77ed691d7f843a114ec6230aa3d8ed0eb6238714187dc0c911a51e43f2b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729903466104910269,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb40526d0e1222059735de592c242b33,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=9e74033c-8d5b-47b6-8e6d-f5c152a1aa84 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2dbf9cb98ca3e       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago       Running             nginx                     0                   eeec9e8541b63       nginx
	37fe19245e37f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   5339180d9cbb6       busybox
	85f95e5d73790       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   17d85262102d7       ingress-nginx-controller-5f85ff4588-5pbh4
	3725f03320a67       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     1                   20825a5abc25a       ingress-nginx-admission-patch-9hm7l
	07e3ba5702439       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   10c06c57266d1       ingress-nginx-admission-create-2rtmc
	02ea5d941e542       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   3214a327c5408       metrics-server-84c5f94fbc-h4pf5
	88738203db747       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   8fcf2dfac27d8       amd-gpu-device-plugin-j7hfs
	231c015381dc6       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   5b4eed433f4cf       kube-ingress-dns-minikube
	a985b19d6ed2e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   038c192f80c6a       storage-provisioner
	5ab5a29a69bd0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   2c7291542e576       coredns-7c65d6cfc9-rg759
	bb77e77566e84       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             5 minutes ago       Running             kube-proxy                0                   346a91f8335e0       kube-proxy-zmp9p
	39fbd6c96fd56       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   49edc5bc1a50f       etcd-addons-602145
	6ae7464e87276       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             5 minutes ago       Running             kube-scheduler            0                   d6cf525d53665       kube-scheduler-addons-602145
	89dbbaf2f83cd       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             5 minutes ago       Running             kube-apiserver            0                   132167104a886       kube-apiserver-addons-602145
	b45da4da24d6a       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             5 minutes ago       Running             kube-controller-manager   0                   38ee77ed691d7       kube-controller-manager-addons-602145
	
	
	==> coredns [5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d] <==
	[INFO] 10.244.0.8:39639 - 7852 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000109699s
	[INFO] 10.244.0.8:39639 - 52872 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000186287s
	[INFO] 10.244.0.8:39639 - 13137 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00010784s
	[INFO] 10.244.0.8:39639 - 16512 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000077624s
	[INFO] 10.244.0.8:39639 - 36441 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000163458s
	[INFO] 10.244.0.8:39639 - 1119 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000124703s
	[INFO] 10.244.0.8:39639 - 49913 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000087207s
	[INFO] 10.244.0.8:42922 - 28273 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000112136s
	[INFO] 10.244.0.8:42922 - 28642 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00014551s
	[INFO] 10.244.0.8:36050 - 40282 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087822s
	[INFO] 10.244.0.8:36050 - 40537 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006983s
	[INFO] 10.244.0.8:40149 - 36371 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054385s
	[INFO] 10.244.0.8:40149 - 36613 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000062401s
	[INFO] 10.244.0.8:46127 - 8691 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000123097s
	[INFO] 10.244.0.8:46127 - 8869 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000062812s
	[INFO] 10.244.0.23:59103 - 54857 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000386488s
	[INFO] 10.244.0.23:58759 - 59506 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00016469s
	[INFO] 10.244.0.23:50497 - 18798 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000121767s
	[INFO] 10.244.0.23:55982 - 431 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000068456s
	[INFO] 10.244.0.23:43120 - 42108 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008274s
	[INFO] 10.244.0.23:58743 - 18360 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101092s
	[INFO] 10.244.0.23:44289 - 1777 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002355447s
	[INFO] 10.244.0.23:46347 - 40603 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002652644s
	[INFO] 10.244.0.26:56708 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000493883s
	[INFO] 10.244.0.26:41320 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000120704s
	
	
	==> describe nodes <==
	Name:               addons-602145
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-602145
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=addons-602145
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_26T00_44_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-602145
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 00:44:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-602145
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 00:49:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 00:48:07 +0000   Sat, 26 Oct 2024 00:44:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 00:48:07 +0000   Sat, 26 Oct 2024 00:44:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 00:48:07 +0000   Sat, 26 Oct 2024 00:44:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 00:48:07 +0000   Sat, 26 Oct 2024 00:44:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.207
	  Hostname:    addons-602145
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fd70c4d5df949d7a6badbd5665220d2
	  System UUID:                8fd70c4d-5df9-49d7-a6ba-dbd5665220d2
	  Boot ID:                    9806ef21-44bc-4e2d-a83d-b2708cb9617e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     hello-world-app-55bf9c44b4-kslk2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-5pbh4    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m53s
	  kube-system                 amd-gpu-device-plugin-j7hfs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 coredns-7c65d6cfc9-rg759                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m1s
	  kube-system                 etcd-addons-602145                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m6s
	  kube-system                 kube-apiserver-addons-602145                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-controller-manager-addons-602145        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-proxy-zmp9p                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-addons-602145                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 metrics-server-84c5f94fbc-h4pf5              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m56s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m59s                  kube-proxy       
	  Normal  Starting                 5m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m13s (x8 over 5m13s)  kubelet          Node addons-602145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s (x8 over 5m13s)  kubelet          Node addons-602145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s (x7 over 5m13s)  kubelet          Node addons-602145 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m7s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m6s (x2 over 5m7s)    kubelet          Node addons-602145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m6s (x2 over 5m7s)    kubelet          Node addons-602145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s (x2 over 5m7s)    kubelet          Node addons-602145 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m5s                   kubelet          Node addons-602145 status is now: NodeReady
	  Normal  RegisteredNode           5m2s                   node-controller  Node addons-602145 event: Registered Node addons-602145 in Controller
	
	
	==> dmesg <==
	[  +5.822416] systemd-fstab-generator[1329]: Ignoring "noauto" option for root device
	[  +0.155867] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.169040] kauditd_printk_skb: 137 callbacks suppressed
	[  +5.151073] kauditd_printk_skb: 129 callbacks suppressed
	[  +5.079085] kauditd_printk_skb: 72 callbacks suppressed
	[Oct26 00:45] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.391242] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.135044] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.740372] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.122217] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.924157] kauditd_printk_skb: 24 callbacks suppressed
	[  +6.255760] kauditd_printk_skb: 7 callbacks suppressed
	[Oct26 00:46] kauditd_printk_skb: 4 callbacks suppressed
	[ +49.236855] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.008183] kauditd_printk_skb: 2 callbacks suppressed
	[Oct26 00:47] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.677992] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.188821] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.281850] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.403719] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.085636] kauditd_printk_skb: 37 callbacks suppressed
	[ +21.586481] kauditd_printk_skb: 2 callbacks suppressed
	[Oct26 00:48] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.876611] kauditd_printk_skb: 7 callbacks suppressed
	[Oct26 00:49] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e] <==
	{"level":"info","ts":"2024-10-26T00:45:52.928886Z","caller":"traceutil/trace.go:171","msg":"trace[1775187850] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-h4pf5; range_end:; response_count:1; response_revision:1132; }","duration":"100.038339ms","start":"2024-10-26T00:45:52.828840Z","end":"2024-10-26T00:45:52.928879Z","steps":["trace[1775187850] 'agreement among raft nodes before linearized reading'  (duration: 99.981705ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T00:45:52.928959Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.13614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T00:45:52.928971Z","caller":"traceutil/trace.go:171","msg":"trace[1483416801] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1132; }","duration":"199.149694ms","start":"2024-10-26T00:45:52.729818Z","end":"2024-10-26T00:45:52.928967Z","steps":["trace[1483416801] 'agreement among raft nodes before linearized reading'  (duration: 199.129951ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T00:45:52.929058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.700659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T00:45:52.929077Z","caller":"traceutil/trace.go:171","msg":"trace[941261657] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1132; }","duration":"240.721132ms","start":"2024-10-26T00:45:52.688350Z","end":"2024-10-26T00:45:52.929071Z","steps":["trace[941261657] 'agreement among raft nodes before linearized reading'  (duration: 240.687986ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T00:46:01.202052Z","caller":"traceutil/trace.go:171","msg":"trace[424660994] linearizableReadLoop","detail":"{readStateIndex:1193; appliedIndex:1192; }","duration":"372.969749ms","start":"2024-10-26T00:46:00.829048Z","end":"2024-10-26T00:46:01.202018Z","steps":["trace[424660994] 'read index received'  (duration: 372.771857ms)","trace[424660994] 'applied index is now lower than readState.Index'  (duration: 197.233µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-26T00:46:01.202192Z","caller":"traceutil/trace.go:171","msg":"trace[1512070454] transaction","detail":"{read_only:false; response_revision:1158; number_of_response:1; }","duration":"434.780856ms","start":"2024-10-26T00:46:00.767362Z","end":"2024-10-26T00:46:01.202143Z","steps":["trace[1512070454] 'process raft request'  (duration: 434.506289ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T00:46:01.202290Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T00:46:00.767340Z","time spent":"434.870295ms","remote":"127.0.0.1:57562","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-qwh5wbjtdpl23x2sw7nz73nroq\" mod_revision:1124 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-qwh5wbjtdpl23x2sw7nz73nroq\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-qwh5wbjtdpl23x2sw7nz73nroq\" > >"}
	{"level":"warn","ts":"2024-10-26T00:46:01.202473Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"373.443796ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-h4pf5\" ","response":"range_response_count:1 size:4566"}
	{"level":"info","ts":"2024-10-26T00:46:01.202519Z","caller":"traceutil/trace.go:171","msg":"trace[2107659026] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-h4pf5; range_end:; response_count:1; response_revision:1158; }","duration":"373.488533ms","start":"2024-10-26T00:46:00.829018Z","end":"2024-10-26T00:46:01.202506Z","steps":["trace[2107659026] 'agreement among raft nodes before linearized reading'  (duration: 373.353549ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T00:46:01.202542Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T00:46:00.828976Z","time spent":"373.559545ms","remote":"127.0.0.1:57486","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4589,"request content":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-h4pf5\" "}
	{"level":"warn","ts":"2024-10-26T00:46:01.202709Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"343.367803ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T00:46:01.202750Z","caller":"traceutil/trace.go:171","msg":"trace[702797044] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1158; }","duration":"343.409789ms","start":"2024-10-26T00:46:00.859334Z","end":"2024-10-26T00:46:01.202744Z","steps":["trace[702797044] 'agreement among raft nodes before linearized reading'  (duration: 343.356732ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T00:46:01.202769Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T00:46:00.859290Z","time spent":"343.473729ms","remote":"127.0.0.1:57486","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-26T00:46:01.202849Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.140277ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-10-26T00:46:01.203561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.43912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-10-26T00:46:01.203660Z","caller":"traceutil/trace.go:171","msg":"trace[1272583878] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1158; }","duration":"229.541685ms","start":"2024-10-26T00:46:00.974109Z","end":"2024-10-26T00:46:01.203651Z","steps":["trace[1272583878] 'agreement among raft nodes before linearized reading'  (duration: 229.395064ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T00:46:01.202877Z","caller":"traceutil/trace.go:171","msg":"trace[1896670558] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1158; }","duration":"225.169446ms","start":"2024-10-26T00:46:00.977702Z","end":"2024-10-26T00:46:01.202871Z","steps":["trace[1896670558] 'agreement among raft nodes before linearized reading'  (duration: 225.131529ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T00:47:53.726718Z","caller":"traceutil/trace.go:171","msg":"trace[630696842] transaction","detail":"{read_only:false; response_revision:1696; number_of_response:1; }","duration":"543.229027ms","start":"2024-10-26T00:47:53.183440Z","end":"2024-10-26T00:47:53.726669Z","steps":["trace[630696842] 'process raft request'  (duration: 542.853235ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T00:47:53.726993Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T00:47:53.183426Z","time spent":"543.411058ms","remote":"127.0.0.1:57562","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1690 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-10-26T00:47:53.727337Z","caller":"traceutil/trace.go:171","msg":"trace[1508805861] linearizableReadLoop","detail":"{readStateIndex:1764; appliedIndex:1764; }","duration":"438.615279ms","start":"2024-10-26T00:47:53.288701Z","end":"2024-10-26T00:47:53.727316Z","steps":["trace[1508805861] 'read index received'  (duration: 438.612318ms)","trace[1508805861] 'applied index is now lower than readState.Index'  (duration: 2.494µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T00:47:53.727425Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"438.71083ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T00:47:53.727463Z","caller":"traceutil/trace.go:171","msg":"trace[1481220585] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1696; }","duration":"438.757352ms","start":"2024-10-26T00:47:53.288697Z","end":"2024-10-26T00:47:53.727455Z","steps":["trace[1481220585] 'agreement among raft nodes before linearized reading'  (duration: 438.679686ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T00:47:53.727497Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T00:47:53.288665Z","time spent":"438.825981ms","remote":"127.0.0.1:57486","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-10-26T00:47:53.733724Z","caller":"traceutil/trace.go:171","msg":"trace[1113148110] transaction","detail":"{read_only:false; response_revision:1697; number_of_response:1; }","duration":"263.243814ms","start":"2024-10-26T00:47:53.470469Z","end":"2024-10-26T00:47:53.733713Z","steps":["trace[1113148110] 'process raft request'  (duration: 263.178473ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:49:38 up 5 min,  0 users,  load average: 0.30, 0.76, 0.43
	Linux addons-602145 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1] <==
	E1026 00:46:23.181687       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.115.151:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.115.151:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.115.151:443: connect: connection refused" logger="UnhandledError"
	I1026 00:46:23.253271       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1026 00:46:48.392559       1 conn.go:339] Error on socket receive: read tcp 192.168.39.207:8443->192.168.39.1:57720: use of closed network connection
	E1026 00:46:48.567953       1 conn.go:339] Error on socket receive: read tcp 192.168.39.207:8443->192.168.39.1:57752: use of closed network connection
	I1026 00:46:57.572332       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.223.238"}
	I1026 00:47:03.552134       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1026 00:47:04.692366       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1026 00:47:15.695350       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1026 00:47:15.873063       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.125.48"}
	E1026 00:47:46.184324       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1026 00:48:01.354681       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1026 00:48:21.482370       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:48:21.485759       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 00:48:21.517099       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:48:21.519247       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 00:48:21.526923       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:48:21.534421       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 00:48:21.605103       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:48:21.605189       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 00:48:21.657221       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:48:21.657265       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1026 00:48:22.605332       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1026 00:48:22.659513       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1026 00:48:22.673063       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1026 00:49:37.301316       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.186.193"}
	
	
	==> kube-controller-manager [b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f] <==
	I1026 00:48:36.857854       1 shared_informer.go:320] Caches are synced for resource quota
	I1026 00:48:37.306283       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1026 00:48:37.306354       1 shared_informer.go:320] Caches are synced for garbage collector
	W1026 00:48:39.466353       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:48:39.466436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:48:40.279070       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:48:40.279236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:48:41.260028       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:48:41.260110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:48:52.603474       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:48:52.603537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:48:54.791738       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:48:54.791798       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:49:01.163670       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:49:01.163725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:49:02.727924       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:49:02.728050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:49:23.845320       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:49:23.845552       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:49:30.907965       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:49:30.908092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1026 00:49:37.114780       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="29.372418ms"
	I1026 00:49:37.129437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.55051ms"
	I1026 00:49:37.149803       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="20.258353ms"
	I1026 00:49:37.149958       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="47.556µs"
	
	
	==> kube-proxy [bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1026 00:44:39.155819       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1026 00:44:39.171972       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.207"]
	E1026 00:44:39.172047       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 00:44:39.251183       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1026 00:44:39.251240       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 00:44:39.251274       1 server_linux.go:169] "Using iptables Proxier"
	I1026 00:44:39.256651       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 00:44:39.256918       1 server.go:483] "Version info" version="v1.31.2"
	I1026 00:44:39.256933       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 00:44:39.258554       1 config.go:199] "Starting service config controller"
	I1026 00:44:39.258565       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1026 00:44:39.258587       1 config.go:105] "Starting endpoint slice config controller"
	I1026 00:44:39.258591       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1026 00:44:39.258973       1 config.go:328] "Starting node config controller"
	I1026 00:44:39.258983       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1026 00:44:39.358662       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1026 00:44:39.358664       1 shared_informer.go:320] Caches are synced for service config
	I1026 00:44:39.359019       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098] <==
	W1026 00:44:29.847722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 00:44:29.847813       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:29.863558       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1026 00:44:29.863642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:29.913398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1026 00:44:29.913542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:29.989962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1026 00:44:29.990009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:30.081203       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1026 00:44:30.081247       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:30.083106       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 00:44:30.083242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:30.111289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1026 00:44:30.111396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:30.168045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1026 00:44:30.168182       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:30.182712       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1026 00:44:30.182756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:30.212887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1026 00:44:30.212985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:30.318962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1026 00:44:30.319068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:30.358977       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1026 00:44:30.359101       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1026 00:44:33.502316       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 26 00:49:32 addons-602145 kubelet[1194]: E1026 00:49:32.216612    1194 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903772216066670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:49:32 addons-602145 kubelet[1194]: E1026 00:49:32.216649    1194 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903772216066670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: E1026 00:49:37.118920    1194 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c681fc4-5331-4a8c-8836-434972b7501f" containerName="csi-external-health-monitor-controller"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: E1026 00:49:37.118971    1194 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c681fc4-5331-4a8c-8836-434972b7501f" containerName="csi-snapshotter"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: E1026 00:49:37.118979    1194 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="29e55a42-07fd-48a7-bef4-fbe602d75ff1" containerName="volume-snapshot-controller"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: E1026 00:49:37.118986    1194 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c681fc4-5331-4a8c-8836-434972b7501f" containerName="hostpath"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: E1026 00:49:37.118994    1194 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e1b66b2-0ebb-466b-b1e0-c1f43ef21b9d" containerName="task-pv-container"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: E1026 00:49:37.119000    1194 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c681fc4-5331-4a8c-8836-434972b7501f" containerName="liveness-probe"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: E1026 00:49:37.119006    1194 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b8843c4-1c3a-4b46-a2c7-e623be1a6fd0" containerName="csi-attacher"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: E1026 00:49:37.119012    1194 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e305542d-5cae-4b7b-b8eb-8746838c449a" containerName="csi-resizer"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: E1026 00:49:37.119022    1194 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c681fc4-5331-4a8c-8836-434972b7501f" containerName="node-driver-registrar"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: E1026 00:49:37.119028    1194 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c681fc4-5331-4a8c-8836-434972b7501f" containerName="csi-provisioner"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: E1026 00:49:37.119034    1194 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="88ad95c2-df86-4bf5-b748-a0356c7d9668" containerName="volume-snapshot-controller"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: I1026 00:49:37.119080    1194 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c681fc4-5331-4a8c-8836-434972b7501f" containerName="node-driver-registrar"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: I1026 00:49:37.119088    1194 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e1b66b2-0ebb-466b-b1e0-c1f43ef21b9d" containerName="task-pv-container"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: I1026 00:49:37.119093    1194 memory_manager.go:354] "RemoveStaleState removing state" podUID="88ad95c2-df86-4bf5-b748-a0356c7d9668" containerName="volume-snapshot-controller"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: I1026 00:49:37.119100    1194 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c681fc4-5331-4a8c-8836-434972b7501f" containerName="csi-snapshotter"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: I1026 00:49:37.119104    1194 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c681fc4-5331-4a8c-8836-434972b7501f" containerName="hostpath"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: I1026 00:49:37.119108    1194 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c681fc4-5331-4a8c-8836-434972b7501f" containerName="liveness-probe"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: I1026 00:49:37.119113    1194 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e55a42-07fd-48a7-bef4-fbe602d75ff1" containerName="volume-snapshot-controller"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: I1026 00:49:37.119118    1194 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b8843c4-1c3a-4b46-a2c7-e623be1a6fd0" containerName="csi-attacher"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: I1026 00:49:37.119123    1194 memory_manager.go:354] "RemoveStaleState removing state" podUID="e305542d-5cae-4b7b-b8eb-8746838c449a" containerName="csi-resizer"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: I1026 00:49:37.119127    1194 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c681fc4-5331-4a8c-8836-434972b7501f" containerName="csi-provisioner"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: I1026 00:49:37.119132    1194 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c681fc4-5331-4a8c-8836-434972b7501f" containerName="csi-external-health-monitor-controller"
	Oct 26 00:49:37 addons-602145 kubelet[1194]: I1026 00:49:37.176415    1194 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cpx9\" (UniqueName: \"kubernetes.io/projected/d68d2841-2c34-4251-9041-77f91bc8ae5a-kube-api-access-8cpx9\") pod \"hello-world-app-55bf9c44b4-kslk2\" (UID: \"d68d2841-2c34-4251-9041-77f91bc8ae5a\") " pod="default/hello-world-app-55bf9c44b4-kslk2"
	
	
	==> storage-provisioner [a985b19d6ed2ebeca4d33799da388cff6c896a67b1792cfb837d44bd1cbdd34e] <==
	I1026 00:44:44.883768       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 00:44:45.125074       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 00:44:45.152428       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 00:44:45.196556       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 00:44:45.196784       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-602145_344be09d-51c7-4147-a809-375a65a491de!
	I1026 00:44:45.196835       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"31e0f04d-eb9c-4d94-9942-69ec8f9e4cfa", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-602145_344be09d-51c7-4147-a809-375a65a491de became leader
	I1026 00:44:45.298279       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-602145_344be09d-51c7-4147-a809-375a65a491de!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-602145 -n addons-602145
helpers_test.go:261: (dbg) Run:  kubectl --context addons-602145 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-kslk2 ingress-nginx-admission-create-2rtmc ingress-nginx-admission-patch-9hm7l
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-602145 describe pod hello-world-app-55bf9c44b4-kslk2 ingress-nginx-admission-create-2rtmc ingress-nginx-admission-patch-9hm7l
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-602145 describe pod hello-world-app-55bf9c44b4-kslk2 ingress-nginx-admission-create-2rtmc ingress-nginx-admission-patch-9hm7l: exit status 1 (66.411524ms)

-- stdout --
	Name:             hello-world-app-55bf9c44b4-kslk2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-602145/192.168.39.207
	Start Time:       Sat, 26 Oct 2024 00:49:37 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8cpx9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8cpx9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-kslk2 to addons-602145
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2rtmc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9hm7l" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-602145 describe pod hello-world-app-55bf9c44b4-kslk2 ingress-nginx-admission-create-2rtmc ingress-nginx-admission-patch-9hm7l: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-602145 addons disable ingress-dns --alsologtostderr -v=1: (1.677797582s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-602145 addons disable ingress --alsologtostderr -v=1: (7.67121403s)
--- FAIL: TestAddons/parallel/Ingress (153.44s)
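The probe this test depends on can be replayed by hand against the same profile; a minimal sketch, assuming the addons-602145 cluster is still running, that mirrors the ssh curl step recorded in the audit table further down:

  # Hypothetical manual re-run of the ingress probe (not part of the recorded test output).
  out/minikube-linux-amd64 -p addons-602145 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'

A response from nginx here would show the ingress-nginx controller routing the nginx.example.com host; the equivalent step in the audit table below has no recorded end time.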

TestAddons/parallel/MetricsServer (347.01s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 6.534324ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-h4pf5" [d14866cc-8862-49b0-991e-5bebca6ba0c0] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003491613s
addons_test.go:402: (dbg) Run:  kubectl --context addons-602145 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-602145 top pods -n kube-system: exit status 1 (83.052177ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-j7hfs, age: 2m23.948525076s

** /stderr **
I1026 00:47:02.950388   17615 retry.go:31] will retry after 2.67306533s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-602145 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-602145 top pods -n kube-system: exit status 1 (144.691403ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-j7hfs, age: 2m26.76762006s

** /stderr **
I1026 00:47:05.769262   17615 retry.go:31] will retry after 5.220916453s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-602145 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-602145 top pods -n kube-system: exit status 1 (63.732334ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-j7hfs, age: 2m32.052787284s

** /stderr **
I1026 00:47:11.054273   17615 retry.go:31] will retry after 8.579602092s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-602145 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-602145 top pods -n kube-system: exit status 1 (66.25564ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-j7hfs, age: 2m40.699062778s

** /stderr **
I1026 00:47:19.701029   17615 retry.go:31] will retry after 8.239747545s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-602145 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-602145 top pods -n kube-system: exit status 1 (63.119898ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-j7hfs, age: 2m49.002493631s

** /stderr **
I1026 00:47:28.004179   17615 retry.go:31] will retry after 17.014487315s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-602145 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-602145 top pods -n kube-system: exit status 1 (62.342706ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-j7hfs, age: 3m6.079985723s

** /stderr **
I1026 00:47:45.081669   17615 retry.go:31] will retry after 16.808317167s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-602145 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-602145 top pods -n kube-system: exit status 1 (65.335855ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-j7hfs, age: 3m22.954846475s

** /stderr **
I1026 00:48:01.956542   17615 retry.go:31] will retry after 39.364426849s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-602145 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-602145 top pods -n kube-system: exit status 1 (61.01176ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-j7hfs, age: 4m2.383556318s

** /stderr **
I1026 00:48:41.385326   17615 retry.go:31] will retry after 57.919968795s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-602145 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-602145 top pods -n kube-system: exit status 1 (74.394789ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-j7hfs, age: 5m0.378784501s

** /stderr **
I1026 00:49:39.380360   17615 retry.go:31] will retry after 32.080630832s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-602145 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-602145 top pods -n kube-system: exit status 1 (61.699836ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-j7hfs, age: 5m32.524635447s

** /stderr **
I1026 00:50:11.526681   17615 retry.go:31] will retry after 44.488199122s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-602145 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-602145 top pods -n kube-system: exit status 1 (62.802372ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-j7hfs, age: 6m17.076252209s

** /stderr **
I1026 00:50:56.077981   17615 retry.go:31] will retry after 1m11.616078036s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-602145 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-602145 top pods -n kube-system: exit status 1 (63.173954ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-j7hfs, age: 7m28.756634731s

** /stderr **
I1026 00:52:07.758395   17615 retry.go:31] will retry after 33.558619508s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-602145 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-602145 top pods -n kube-system: exit status 1 (61.603862ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-j7hfs, age: 8m2.379577048s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
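The check above is only a repeated kubectl top pods; a short list of follow-up commands one might run to see why the metrics API never served data, sketched on the assumption that the addon installed the usual metrics-server Deployment and APIService names (those names are not taken from this run's output):

  # Hypothetical diagnostics for the metrics pipeline; object names are assumptions.
  kubectl --context addons-602145 get apiservice v1beta1.metrics.k8s.io
  kubectl --context addons-602145 -n kube-system logs deploy/metrics-server
  kubectl --context addons-602145 top nodes

An APIService stuck at Available=False or scrape errors in the metrics-server logs would be consistent with every retry above returning "Metrics not available".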
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-602145 -n addons-602145
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-602145 logs -n 25: (1.161039781s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-699862                                                                     | download-only-699862 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC | 26 Oct 24 00:43 UTC |
	| delete  | -p download-only-798188                                                                     | download-only-798188 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC | 26 Oct 24 00:43 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-422612 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC |                     |
	|         | binary-mirror-422612                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37063                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-422612                                                                     | binary-mirror-422612 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC | 26 Oct 24 00:43 UTC |
	| addons  | enable dashboard -p                                                                         | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC |                     |
	|         | addons-602145                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC |                     |
	|         | addons-602145                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-602145 --wait=true                                                                | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC | 26 Oct 24 00:46 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-602145 addons disable                                                                | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:46 UTC | 26 Oct 24 00:46 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-602145 addons disable                                                                | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:46 UTC | 26 Oct 24 00:46 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:46 UTC | 26 Oct 24 00:46 UTC |
	|         | -p addons-602145                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-602145 addons                                                                        | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:47 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-602145 addons disable                                                                | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:47 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-602145 ip                                                                            | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:47 UTC |
	| addons  | addons-602145 addons disable                                                                | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:47 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-602145 addons                                                                        | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:47 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-602145 addons disable                                                                | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:47 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-602145 ssh curl -s                                                                   | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-602145 ssh cat                                                                       | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:47 UTC |
	|         | /opt/local-path-provisioner/pvc-323584fd-5eeb-4dce-983c-67e6333a4dfe_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-602145 addons disable                                                                | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:48 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-602145 addons                                                                        | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:47 UTC | 26 Oct 24 00:47 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-602145 addons                                                                        | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:48 UTC | 26 Oct 24 00:48 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-602145 addons                                                                        | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:48 UTC | 26 Oct 24 00:48 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-602145 ip                                                                            | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:49 UTC | 26 Oct 24 00:49 UTC |
	| addons  | addons-602145 addons disable                                                                | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:49 UTC | 26 Oct 24 00:49 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-602145 addons disable                                                                | addons-602145        | jenkins | v1.34.0 | 26 Oct 24 00:49 UTC | 26 Oct 24 00:49 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 00:43:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 00:43:55.614406   18362 out.go:345] Setting OutFile to fd 1 ...
	I1026 00:43:55.614530   18362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:43:55.614539   18362 out.go:358] Setting ErrFile to fd 2...
	I1026 00:43:55.614544   18362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:43:55.614714   18362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 00:43:55.615270   18362 out.go:352] Setting JSON to false
	I1026 00:43:55.616067   18362 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1576,"bootTime":1729901860,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 00:43:55.616123   18362 start.go:139] virtualization: kvm guest
	I1026 00:43:55.617880   18362 out.go:177] * [addons-602145] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 00:43:55.619108   18362 notify.go:220] Checking for updates...
	I1026 00:43:55.619121   18362 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 00:43:55.620411   18362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:43:55.621634   18362 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 00:43:55.622772   18362 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:43:55.623847   18362 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 00:43:55.625354   18362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 00:43:55.626552   18362 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 00:43:55.657051   18362 out.go:177] * Using the kvm2 driver based on user configuration
	I1026 00:43:55.658151   18362 start.go:297] selected driver: kvm2
	I1026 00:43:55.658164   18362 start.go:901] validating driver "kvm2" against <nil>
	I1026 00:43:55.658176   18362 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 00:43:55.659096   18362 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:43:55.659181   18362 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 00:43:55.674226   18362 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 00:43:55.674278   18362 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 00:43:55.674580   18362 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 00:43:55.674612   18362 cni.go:84] Creating CNI manager for ""
	I1026 00:43:55.674697   18362 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 00:43:55.674709   18362 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 00:43:55.674775   18362 start.go:340] cluster config:
	{Name:addons-602145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-602145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 00:43:55.674947   18362 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:43:55.676738   18362 out.go:177] * Starting "addons-602145" primary control-plane node in "addons-602145" cluster
	I1026 00:43:55.677910   18362 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 00:43:55.677939   18362 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 00:43:55.677952   18362 cache.go:56] Caching tarball of preloaded images
	I1026 00:43:55.678018   18362 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 00:43:55.678029   18362 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 00:43:55.678335   18362 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/config.json ...
	I1026 00:43:55.678356   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/config.json: {Name:mk8d11eb76abf3e32b46f47b73cd48b347338ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:43:55.678473   18362 start.go:360] acquireMachinesLock for addons-602145: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 00:43:55.678513   18362 start.go:364] duration metric: took 29.027µs to acquireMachinesLock for "addons-602145"
	I1026 00:43:55.678529   18362 start.go:93] Provisioning new machine with config: &{Name:addons-602145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-602145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 00:43:55.678580   18362 start.go:125] createHost starting for "" (driver="kvm2")
	I1026 00:43:55.680197   18362 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1026 00:43:55.680311   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:43:55.680351   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:43:55.694416   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I1026 00:43:55.694791   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:43:55.695295   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:43:55.695315   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:43:55.695693   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:43:55.695868   18362 main.go:141] libmachine: (addons-602145) Calling .GetMachineName
	I1026 00:43:55.696001   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:43:55.696160   18362 start.go:159] libmachine.API.Create for "addons-602145" (driver="kvm2")
	I1026 00:43:55.696200   18362 client.go:168] LocalClient.Create starting
	I1026 00:43:55.696248   18362 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 00:43:55.815059   18362 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 00:43:55.950771   18362 main.go:141] libmachine: Running pre-create checks...
	I1026 00:43:55.950795   18362 main.go:141] libmachine: (addons-602145) Calling .PreCreateCheck
	I1026 00:43:55.951337   18362 main.go:141] libmachine: (addons-602145) Calling .GetConfigRaw
	I1026 00:43:55.951765   18362 main.go:141] libmachine: Creating machine...
	I1026 00:43:55.951779   18362 main.go:141] libmachine: (addons-602145) Calling .Create
	I1026 00:43:55.951920   18362 main.go:141] libmachine: (addons-602145) Creating KVM machine...
	I1026 00:43:55.953140   18362 main.go:141] libmachine: (addons-602145) DBG | found existing default KVM network
	I1026 00:43:55.953854   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:55.953704   18383 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a40}
	I1026 00:43:55.953898   18362 main.go:141] libmachine: (addons-602145) DBG | created network xml: 
	I1026 00:43:55.953922   18362 main.go:141] libmachine: (addons-602145) DBG | <network>
	I1026 00:43:55.953935   18362 main.go:141] libmachine: (addons-602145) DBG |   <name>mk-addons-602145</name>
	I1026 00:43:55.953948   18362 main.go:141] libmachine: (addons-602145) DBG |   <dns enable='no'/>
	I1026 00:43:55.953957   18362 main.go:141] libmachine: (addons-602145) DBG |   
	I1026 00:43:55.953972   18362 main.go:141] libmachine: (addons-602145) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1026 00:43:55.954003   18362 main.go:141] libmachine: (addons-602145) DBG |     <dhcp>
	I1026 00:43:55.954029   18362 main.go:141] libmachine: (addons-602145) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1026 00:43:55.954041   18362 main.go:141] libmachine: (addons-602145) DBG |     </dhcp>
	I1026 00:43:55.954050   18362 main.go:141] libmachine: (addons-602145) DBG |   </ip>
	I1026 00:43:55.954059   18362 main.go:141] libmachine: (addons-602145) DBG |   
	I1026 00:43:55.954067   18362 main.go:141] libmachine: (addons-602145) DBG | </network>
	I1026 00:43:55.954081   18362 main.go:141] libmachine: (addons-602145) DBG | 
	I1026 00:43:55.959369   18362 main.go:141] libmachine: (addons-602145) DBG | trying to create private KVM network mk-addons-602145 192.168.39.0/24...
	I1026 00:43:56.022338   18362 main.go:141] libmachine: (addons-602145) DBG | private KVM network mk-addons-602145 192.168.39.0/24 created
	I1026 00:43:56.022368   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:56.022296   18383 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:43:56.022387   18362 main.go:141] libmachine: (addons-602145) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145 ...
	I1026 00:43:56.022407   18362 main.go:141] libmachine: (addons-602145) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 00:43:56.022465   18362 main.go:141] libmachine: (addons-602145) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 00:43:56.286340   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:56.286214   18383 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa...
	I1026 00:43:56.501719   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:56.501588   18383 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/addons-602145.rawdisk...
	I1026 00:43:56.501745   18362 main.go:141] libmachine: (addons-602145) DBG | Writing magic tar header
	I1026 00:43:56.501754   18362 main.go:141] libmachine: (addons-602145) DBG | Writing SSH key tar header
	I1026 00:43:56.501761   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:56.501706   18383 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145 ...
	I1026 00:43:56.501851   18362 main.go:141] libmachine: (addons-602145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145
	I1026 00:43:56.501878   18362 main.go:141] libmachine: (addons-602145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 00:43:56.501894   18362 main.go:141] libmachine: (addons-602145) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145 (perms=drwx------)
	I1026 00:43:56.501905   18362 main.go:141] libmachine: (addons-602145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:43:56.501915   18362 main.go:141] libmachine: (addons-602145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 00:43:56.501924   18362 main.go:141] libmachine: (addons-602145) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 00:43:56.501947   18362 main.go:141] libmachine: (addons-602145) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 00:43:56.501960   18362 main.go:141] libmachine: (addons-602145) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 00:43:56.501970   18362 main.go:141] libmachine: (addons-602145) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 00:43:56.501979   18362 main.go:141] libmachine: (addons-602145) DBG | Checking permissions on dir: /home/jenkins
	I1026 00:43:56.501992   18362 main.go:141] libmachine: (addons-602145) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 00:43:56.502000   18362 main.go:141] libmachine: (addons-602145) DBG | Checking permissions on dir: /home
	I1026 00:43:56.502015   18362 main.go:141] libmachine: (addons-602145) DBG | Skipping /home - not owner
	I1026 00:43:56.502024   18362 main.go:141] libmachine: (addons-602145) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 00:43:56.502028   18362 main.go:141] libmachine: (addons-602145) Creating domain...
	I1026 00:43:56.503084   18362 main.go:141] libmachine: (addons-602145) define libvirt domain using xml: 
	I1026 00:43:56.503109   18362 main.go:141] libmachine: (addons-602145) <domain type='kvm'>
	I1026 00:43:56.503124   18362 main.go:141] libmachine: (addons-602145)   <name>addons-602145</name>
	I1026 00:43:56.503140   18362 main.go:141] libmachine: (addons-602145)   <memory unit='MiB'>4000</memory>
	I1026 00:43:56.503150   18362 main.go:141] libmachine: (addons-602145)   <vcpu>2</vcpu>
	I1026 00:43:56.503159   18362 main.go:141] libmachine: (addons-602145)   <features>
	I1026 00:43:56.503178   18362 main.go:141] libmachine: (addons-602145)     <acpi/>
	I1026 00:43:56.503199   18362 main.go:141] libmachine: (addons-602145)     <apic/>
	I1026 00:43:56.503216   18362 main.go:141] libmachine: (addons-602145)     <pae/>
	I1026 00:43:56.503234   18362 main.go:141] libmachine: (addons-602145)     
	I1026 00:43:56.503247   18362 main.go:141] libmachine: (addons-602145)   </features>
	I1026 00:43:56.503257   18362 main.go:141] libmachine: (addons-602145)   <cpu mode='host-passthrough'>
	I1026 00:43:56.503266   18362 main.go:141] libmachine: (addons-602145)   
	I1026 00:43:56.503276   18362 main.go:141] libmachine: (addons-602145)   </cpu>
	I1026 00:43:56.503286   18362 main.go:141] libmachine: (addons-602145)   <os>
	I1026 00:43:56.503295   18362 main.go:141] libmachine: (addons-602145)     <type>hvm</type>
	I1026 00:43:56.503306   18362 main.go:141] libmachine: (addons-602145)     <boot dev='cdrom'/>
	I1026 00:43:56.503319   18362 main.go:141] libmachine: (addons-602145)     <boot dev='hd'/>
	I1026 00:43:56.503334   18362 main.go:141] libmachine: (addons-602145)     <bootmenu enable='no'/>
	I1026 00:43:56.503348   18362 main.go:141] libmachine: (addons-602145)   </os>
	I1026 00:43:56.503357   18362 main.go:141] libmachine: (addons-602145)   <devices>
	I1026 00:43:56.503362   18362 main.go:141] libmachine: (addons-602145)     <disk type='file' device='cdrom'>
	I1026 00:43:56.503382   18362 main.go:141] libmachine: (addons-602145)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/boot2docker.iso'/>
	I1026 00:43:56.503390   18362 main.go:141] libmachine: (addons-602145)       <target dev='hdc' bus='scsi'/>
	I1026 00:43:56.503395   18362 main.go:141] libmachine: (addons-602145)       <readonly/>
	I1026 00:43:56.503401   18362 main.go:141] libmachine: (addons-602145)     </disk>
	I1026 00:43:56.503410   18362 main.go:141] libmachine: (addons-602145)     <disk type='file' device='disk'>
	I1026 00:43:56.503421   18362 main.go:141] libmachine: (addons-602145)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 00:43:56.503440   18362 main.go:141] libmachine: (addons-602145)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/addons-602145.rawdisk'/>
	I1026 00:43:56.503457   18362 main.go:141] libmachine: (addons-602145)       <target dev='hda' bus='virtio'/>
	I1026 00:43:56.503471   18362 main.go:141] libmachine: (addons-602145)     </disk>
	I1026 00:43:56.503483   18362 main.go:141] libmachine: (addons-602145)     <interface type='network'>
	I1026 00:43:56.503495   18362 main.go:141] libmachine: (addons-602145)       <source network='mk-addons-602145'/>
	I1026 00:43:56.503505   18362 main.go:141] libmachine: (addons-602145)       <model type='virtio'/>
	I1026 00:43:56.503515   18362 main.go:141] libmachine: (addons-602145)     </interface>
	I1026 00:43:56.503525   18362 main.go:141] libmachine: (addons-602145)     <interface type='network'>
	I1026 00:43:56.503542   18362 main.go:141] libmachine: (addons-602145)       <source network='default'/>
	I1026 00:43:56.503557   18362 main.go:141] libmachine: (addons-602145)       <model type='virtio'/>
	I1026 00:43:56.503569   18362 main.go:141] libmachine: (addons-602145)     </interface>
	I1026 00:43:56.503578   18362 main.go:141] libmachine: (addons-602145)     <serial type='pty'>
	I1026 00:43:56.503589   18362 main.go:141] libmachine: (addons-602145)       <target port='0'/>
	I1026 00:43:56.503596   18362 main.go:141] libmachine: (addons-602145)     </serial>
	I1026 00:43:56.503602   18362 main.go:141] libmachine: (addons-602145)     <console type='pty'>
	I1026 00:43:56.503618   18362 main.go:141] libmachine: (addons-602145)       <target type='serial' port='0'/>
	I1026 00:43:56.503628   18362 main.go:141] libmachine: (addons-602145)     </console>
	I1026 00:43:56.503637   18362 main.go:141] libmachine: (addons-602145)     <rng model='virtio'>
	I1026 00:43:56.503653   18362 main.go:141] libmachine: (addons-602145)       <backend model='random'>/dev/random</backend>
	I1026 00:43:56.503678   18362 main.go:141] libmachine: (addons-602145)     </rng>
	I1026 00:43:56.503693   18362 main.go:141] libmachine: (addons-602145)     
	I1026 00:43:56.503701   18362 main.go:141] libmachine: (addons-602145)     
	I1026 00:43:56.503706   18362 main.go:141] libmachine: (addons-602145)   </devices>
	I1026 00:43:56.503714   18362 main.go:141] libmachine: (addons-602145) </domain>
	I1026 00:43:56.503723   18362 main.go:141] libmachine: (addons-602145) 
	I1026 00:43:56.509222   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c9:0b:50 in network default
	I1026 00:43:56.509751   18362 main.go:141] libmachine: (addons-602145) Ensuring networks are active...
	I1026 00:43:56.509768   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:43:56.510402   18362 main.go:141] libmachine: (addons-602145) Ensuring network default is active
	I1026 00:43:56.510731   18362 main.go:141] libmachine: (addons-602145) Ensuring network mk-addons-602145 is active
	I1026 00:43:56.511210   18362 main.go:141] libmachine: (addons-602145) Getting domain xml...
	I1026 00:43:56.511787   18362 main.go:141] libmachine: (addons-602145) Creating domain...
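With the domain XML dumped above, defining the persistent domain and booting it ("Creating domain...") boils down to a define-then-create call. A minimal sketch, reusing the libvirt connection from the previous example plus "fmt"; this is not the kvm2 driver's real implementation, and domainXML stands in for the XML above:

	// createDomain defines a persistent KVM domain from XML and boots it,
	// roughly the "define libvirt domain using xml" / "Creating domain..." pair above.
	func createDomain(conn *libvirt.Connect, domainXML string) error {
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return fmt.Errorf("define domain: %w", err)
		}
		defer dom.Free()
		if err := dom.Create(); err != nil { // like `virsh start addons-602145`
			return fmt.Errorf("start domain: %w", err)
		}
		return nil
	}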
	I1026 00:43:57.889883   18362 main.go:141] libmachine: (addons-602145) Waiting to get IP...
	I1026 00:43:57.890736   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:43:57.891169   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:43:57.891232   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:57.891176   18383 retry.go:31] will retry after 198.139157ms: waiting for machine to come up
	I1026 00:43:58.090417   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:43:58.090774   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:43:58.090812   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:58.090764   18383 retry.go:31] will retry after 324.888481ms: waiting for machine to come up
	I1026 00:43:58.417469   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:43:58.417887   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:43:58.417928   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:58.417842   18383 retry.go:31] will retry after 294.424781ms: waiting for machine to come up
	I1026 00:43:58.714356   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:43:58.714746   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:43:58.714775   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:58.714706   18383 retry.go:31] will retry after 519.90861ms: waiting for machine to come up
	I1026 00:43:59.236542   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:43:59.236895   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:43:59.236929   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:59.236866   18383 retry.go:31] will retry after 592.882017ms: waiting for machine to come up
	I1026 00:43:59.831579   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:43:59.832004   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:43:59.832026   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:43:59.831957   18383 retry.go:31] will retry after 902.357908ms: waiting for machine to come up
	I1026 00:44:00.735715   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:00.736126   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:44:00.736149   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:44:00.736091   18383 retry.go:31] will retry after 1.1727963s: waiting for machine to come up
	I1026 00:44:01.910538   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:01.911001   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:44:01.911029   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:44:01.910950   18383 retry.go:31] will retry after 1.229780318s: waiting for machine to come up
	I1026 00:44:03.142273   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:03.142619   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:44:03.142646   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:44:03.142555   18383 retry.go:31] will retry after 1.794501043s: waiting for machine to come up
	I1026 00:44:04.939417   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:04.939681   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:44:04.939704   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:44:04.939638   18383 retry.go:31] will retry after 1.740655734s: waiting for machine to come up
	I1026 00:44:06.681963   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:06.682436   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:44:06.682461   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:44:06.682396   18383 retry.go:31] will retry after 2.565591967s: waiting for machine to come up
	I1026 00:44:09.251163   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:09.251533   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:44:09.251556   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:44:09.251499   18383 retry.go:31] will retry after 3.368747645s: waiting for machine to come up
	I1026 00:44:12.622506   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:12.622788   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find current IP address of domain addons-602145 in network mk-addons-602145
	I1026 00:44:12.622817   18362 main.go:141] libmachine: (addons-602145) DBG | I1026 00:44:12.622743   18383 retry.go:31] will retry after 3.25115137s: waiting for machine to come up
	I1026 00:44:15.875930   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:15.876313   18362 main.go:141] libmachine: (addons-602145) Found IP for machine: 192.168.39.207
	I1026 00:44:15.876352   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has current primary IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:15.876361   18362 main.go:141] libmachine: (addons-602145) Reserving static IP address...
	I1026 00:44:15.876690   18362 main.go:141] libmachine: (addons-602145) DBG | unable to find host DHCP lease matching {name: "addons-602145", mac: "52:54:00:c1:12:e0", ip: "192.168.39.207"} in network mk-addons-602145
	I1026 00:44:15.946580   18362 main.go:141] libmachine: (addons-602145) Reserved static IP address: 192.168.39.207
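The "Waiting to get IP" loop above repeatedly looks for a DHCP lease whose MAC matches the new domain, backing off between attempts until one appears (the "found host DHCP lease matching ..." lines). A hedged sketch of that polling pattern with the same assumed libvirt Go bindings, plus "strings" and "time"; the fixed sleep stands in for the varying retry intervals minikube logs:

	// waitForLeaseIP polls a libvirt network's DHCP leases for the given MAC
	// and returns the leased IP, or an error once the timeout elapses.
	func waitForLeaseIP(nw *libvirt.Network, mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			leases, err := nw.GetDHCPLeases()
			if err != nil {
				return "", fmt.Errorf("list DHCP leases: %w", err)
			}
			for _, l := range leases {
				if strings.EqualFold(l.Mac, mac) {
					return l.IPaddr, nil
				}
			}
			time.Sleep(2 * time.Second) // the driver retries with growing delays
		}
		return "", fmt.Errorf("no DHCP lease for %s within %s", mac, timeout)
	}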
	I1026 00:44:15.946617   18362 main.go:141] libmachine: (addons-602145) Waiting for SSH to be available...
	I1026 00:44:15.946626   18362 main.go:141] libmachine: (addons-602145) DBG | Getting to WaitForSSH function...
	I1026 00:44:15.949198   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:15.949664   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:15.949694   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:15.949919   18362 main.go:141] libmachine: (addons-602145) DBG | Using SSH client type: external
	I1026 00:44:15.949932   18362 main.go:141] libmachine: (addons-602145) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa (-rw-------)
	I1026 00:44:15.949968   18362 main.go:141] libmachine: (addons-602145) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 00:44:15.949990   18362 main.go:141] libmachine: (addons-602145) DBG | About to run SSH command:
	I1026 00:44:15.950001   18362 main.go:141] libmachine: (addons-602145) DBG | exit 0
	I1026 00:44:16.077239   18362 main.go:141] libmachine: (addons-602145) DBG | SSH cmd err, output: <nil>: 
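WaitForSSH above just runs `exit 0` through the external ssh client with the option vector logged a few lines earlier, retrying until the command exits cleanly. A standalone sketch of that probe using only os/exec; the host, user and key path are taken from the log, while the retry count and sleep are made up for illustration:

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	// probeSSH runs `exit 0` over ssh until it succeeds or attempts run out.
	func probeSSH(host, keyPath string, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				"docker@"+host, "exit 0")
			if err = cmd.Run(); err == nil {
				return nil // SSH answered; the machine is reachable
			}
			time.Sleep(3 * time.Second)
		}
		return err
	}

	func main() {
		key := "/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa"
		if err := probeSSH("192.168.39.207", key, 20); err != nil {
			log.Fatalf("SSH never became available: %v", err)
		}
		log.Println("SSH is available")
	}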
	I1026 00:44:16.077586   18362 main.go:141] libmachine: (addons-602145) KVM machine creation complete!
	I1026 00:44:16.077868   18362 main.go:141] libmachine: (addons-602145) Calling .GetConfigRaw
	I1026 00:44:16.078412   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:16.078561   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:16.078688   18362 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 00:44:16.078705   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:16.079985   18362 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 00:44:16.079998   18362 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 00:44:16.080002   18362 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 00:44:16.080008   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:16.082144   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.082451   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:16.082471   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.082599   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:16.082780   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.082930   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.083044   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:16.083182   18362 main.go:141] libmachine: Using SSH client type: native
	I1026 00:44:16.083354   18362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I1026 00:44:16.083363   18362 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 00:44:16.180435   18362 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 00:44:16.180459   18362 main.go:141] libmachine: Detecting the provisioner...
	I1026 00:44:16.180466   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:16.183346   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.183683   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:16.183725   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.183875   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:16.184062   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.184220   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.184359   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:16.184479   18362 main.go:141] libmachine: Using SSH client type: native
	I1026 00:44:16.184680   18362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I1026 00:44:16.184692   18362 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 00:44:16.281574   18362 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 00:44:16.281693   18362 main.go:141] libmachine: found compatible host: buildroot
	I1026 00:44:16.281708   18362 main.go:141] libmachine: Provisioning with buildroot...
	I1026 00:44:16.281718   18362 main.go:141] libmachine: (addons-602145) Calling .GetMachineName
	I1026 00:44:16.281944   18362 buildroot.go:166] provisioning hostname "addons-602145"
	I1026 00:44:16.281973   18362 main.go:141] libmachine: (addons-602145) Calling .GetMachineName
	I1026 00:44:16.282147   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:16.284487   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.284809   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:16.284828   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.284943   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:16.285111   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.285247   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.285371   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:16.285551   18362 main.go:141] libmachine: Using SSH client type: native
	I1026 00:44:16.285723   18362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I1026 00:44:16.285735   18362 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-602145 && echo "addons-602145" | sudo tee /etc/hostname
	I1026 00:44:16.400619   18362 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-602145
	
	I1026 00:44:16.400650   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:16.403067   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.403376   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:16.403412   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.403537   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:16.403705   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.403866   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.403961   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:16.404102   18362 main.go:141] libmachine: Using SSH client type: native
	I1026 00:44:16.404260   18362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I1026 00:44:16.404274   18362 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-602145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-602145/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-602145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 00:44:16.509123   18362 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 00:44:16.509157   18362 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 00:44:16.509175   18362 buildroot.go:174] setting up certificates
	I1026 00:44:16.509185   18362 provision.go:84] configureAuth start
	I1026 00:44:16.509193   18362 main.go:141] libmachine: (addons-602145) Calling .GetMachineName
	I1026 00:44:16.509480   18362 main.go:141] libmachine: (addons-602145) Calling .GetIP
	I1026 00:44:16.511898   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.512164   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:16.512192   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.512296   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:16.514231   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.514585   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:16.514612   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.514711   18362 provision.go:143] copyHostCerts
	I1026 00:44:16.514800   18362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 00:44:16.514918   18362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 00:44:16.514999   18362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 00:44:16.515065   18362 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.addons-602145 san=[127.0.0.1 192.168.39.207 addons-602145 localhost minikube]
	I1026 00:44:16.681734   18362 provision.go:177] copyRemoteCerts
	I1026 00:44:16.681795   18362 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 00:44:16.681816   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:16.684306   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.684602   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:16.684623   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.684844   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:16.685039   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.685186   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:16.685286   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:16.762584   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 00:44:16.784173   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 00:44:16.805183   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 00:44:16.826070   18362 provision.go:87] duration metric: took 316.87402ms to configureAuth
	I1026 00:44:16.826101   18362 buildroot.go:189] setting minikube options for container-runtime
	I1026 00:44:16.826293   18362 config.go:182] Loaded profile config "addons-602145": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 00:44:16.826378   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:16.828731   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.829026   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:16.829046   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:16.829208   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:16.829365   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.829500   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:16.829611   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:16.829743   18362 main.go:141] libmachine: Using SSH client type: native
	I1026 00:44:16.829935   18362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I1026 00:44:16.829952   18362 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 00:44:17.044788   18362 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 00:44:17.044815   18362 main.go:141] libmachine: Checking connection to Docker...
	I1026 00:44:17.044822   18362 main.go:141] libmachine: (addons-602145) Calling .GetURL
	I1026 00:44:17.046228   18362 main.go:141] libmachine: (addons-602145) DBG | Using libvirt version 6000000
	I1026 00:44:17.048406   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.048743   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:17.048771   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.048897   18362 main.go:141] libmachine: Docker is up and running!
	I1026 00:44:17.048909   18362 main.go:141] libmachine: Reticulating splines...
	I1026 00:44:17.048915   18362 client.go:171] duration metric: took 21.35270457s to LocalClient.Create
	I1026 00:44:17.048936   18362 start.go:167] duration metric: took 21.352777514s to libmachine.API.Create "addons-602145"
	I1026 00:44:17.048950   18362 start.go:293] postStartSetup for "addons-602145" (driver="kvm2")
	I1026 00:44:17.048962   18362 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 00:44:17.048978   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:17.049178   18362 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 00:44:17.049206   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:17.051103   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.051466   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:17.051491   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.051603   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:17.051758   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:17.051878   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:17.051983   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:17.130951   18362 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 00:44:17.134727   18362 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 00:44:17.134753   18362 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 00:44:17.134824   18362 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 00:44:17.134847   18362 start.go:296] duration metric: took 85.889764ms for postStartSetup
	I1026 00:44:17.134876   18362 main.go:141] libmachine: (addons-602145) Calling .GetConfigRaw
	I1026 00:44:17.135429   18362 main.go:141] libmachine: (addons-602145) Calling .GetIP
	I1026 00:44:17.137786   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.138127   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:17.138153   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.138350   18362 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/config.json ...
	I1026 00:44:17.138517   18362 start.go:128] duration metric: took 21.45992765s to createHost
	I1026 00:44:17.138537   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:17.140745   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.141024   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:17.141064   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.141220   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:17.141371   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:17.141528   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:17.141641   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:17.141775   18362 main.go:141] libmachine: Using SSH client type: native
	I1026 00:44:17.141968   18362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I1026 00:44:17.141978   18362 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 00:44:17.241636   18362 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729903457.215519488
	
	I1026 00:44:17.241658   18362 fix.go:216] guest clock: 1729903457.215519488
	I1026 00:44:17.241665   18362 fix.go:229] Guest: 2024-10-26 00:44:17.215519488 +0000 UTC Remote: 2024-10-26 00:44:17.138527799 +0000 UTC m=+21.559650378 (delta=76.991689ms)
	I1026 00:44:17.241694   18362 fix.go:200] guest clock delta is within tolerance: 76.991689ms
	I1026 00:44:17.241699   18362 start.go:83] releasing machines lock for "addons-602145", held for 21.563176612s
	I1026 00:44:17.241717   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:17.241948   18362 main.go:141] libmachine: (addons-602145) Calling .GetIP
	I1026 00:44:17.244474   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.244802   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:17.244828   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.244956   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:17.245372   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:17.245651   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:17.245741   18362 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 00:44:17.245787   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:17.245857   18362 ssh_runner.go:195] Run: cat /version.json
	I1026 00:44:17.245868   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:17.248370   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.248552   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.248690   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:17.248711   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.248878   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:17.248893   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:17.248915   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:17.249035   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:17.249081   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:17.249158   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:17.249271   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:17.249274   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:17.249391   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:17.249524   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:17.371475   18362 ssh_runner.go:195] Run: systemctl --version
	I1026 00:44:17.377239   18362 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 00:44:17.533030   18362 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 00:44:17.539110   18362 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 00:44:17.539170   18362 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 00:44:17.556792   18362 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 00:44:17.556816   18362 start.go:495] detecting cgroup driver to use...
	I1026 00:44:17.556879   18362 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 00:44:17.571840   18362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 00:44:17.585193   18362 docker.go:217] disabling cri-docker service (if available) ...
	I1026 00:44:17.585244   18362 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 00:44:17.598450   18362 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 00:44:17.611348   18362 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 00:44:17.724975   18362 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 00:44:17.860562   18362 docker.go:233] disabling docker service ...
	I1026 00:44:17.860624   18362 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 00:44:17.878417   18362 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 00:44:17.890621   18362 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 00:44:18.027576   18362 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 00:44:18.152826   18362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 00:44:18.165246   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 00:44:18.181792   18362 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 00:44:18.181843   18362 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:44:18.191166   18362 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 00:44:18.191229   18362 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:44:18.200643   18362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:44:18.210120   18362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:44:18.219499   18362 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 00:44:18.229225   18362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:44:18.238769   18362 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:44:18.254338   18362 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 00:44:18.263553   18362 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 00:44:18.271947   18362 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 00:44:18.271998   18362 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 00:44:18.283179   18362 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 00:44:18.291951   18362 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 00:44:18.411944   18362 ssh_runner.go:195] Run: sudo systemctl restart crio
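Taken together, the sed edits above configure /etc/crio/crio.conf.d/02-crio.conf with the pause image registry.k8s.io/pause:3.10, cgroup_manager = "cgroupfs", conmon_cgroup = "pod" and net.ipv4.ip_unprivileged_port_start=0 before CRI-O is restarted. A sketch of writing an equivalent drop-in in one pass; the section layout below is inferred from those commands, not copied from the real file:

	package main

	import (
		"log"
		"os"
	)

	// crioDropIn approximates what the sed series above leaves behind.
	const crioDropIn = `[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	`

	func main() {
		if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
			log.Fatalf("write CRI-O drop-in: %v", err)
		}
		log.Println("wrote drop-in; restart crio to apply it")
	}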
	I1026 00:44:18.500474   18362 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 00:44:18.500561   18362 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 00:44:18.505361   18362 start.go:563] Will wait 60s for crictl version
	I1026 00:44:18.505435   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:44:18.508746   18362 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 00:44:18.544203   18362 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 00:44:18.544314   18362 ssh_runner.go:195] Run: crio --version
	I1026 00:44:18.569896   18362 ssh_runner.go:195] Run: crio --version
	I1026 00:44:18.597852   18362 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 00:44:18.599187   18362 main.go:141] libmachine: (addons-602145) Calling .GetIP
	I1026 00:44:18.602535   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:18.602978   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:18.603007   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:18.603209   18362 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 00:44:18.606878   18362 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 00:44:18.618164   18362 kubeadm.go:883] updating cluster {Name:addons-602145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-602145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 00:44:18.618259   18362 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 00:44:18.618302   18362 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 00:44:18.647501   18362 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1026 00:44:18.647556   18362 ssh_runner.go:195] Run: which lz4
	I1026 00:44:18.650977   18362 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 00:44:18.654650   18362 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 00:44:18.654688   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1026 00:44:19.704577   18362 crio.go:462] duration metric: took 1.05362861s to copy over tarball
	I1026 00:44:19.704656   18362 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 00:44:21.744004   18362 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.039313463s)
	I1026 00:44:21.744029   18362 crio.go:469] duration metric: took 2.039426425s to extract the tarball
	I1026 00:44:21.744036   18362 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 00:44:21.779704   18362 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 00:44:21.823505   18362 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 00:44:21.823530   18362 cache_images.go:84] Images are preloaded, skipping loading
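
For context on the preload check recorded above (crio.go:510 / crio.go:514): minikube lists the runtime's images and only copies and extracts the preloaded tarball when a required image tag is missing. The following stand-alone Go sketch illustrates that check under stated assumptions; it is not minikube's actual code, and the JSON field names for `crictl images --output json` are assumed from typical CRI output.

// preloadcheck.go — hypothetical sketch of the "are images preloaded?" probe.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImageList mirrors only the fields used here; field names are assumptions.
type criImageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func imagePreloaded(tag string) (bool, error) {
	// Same command the log runs: sudo crictl images --output json
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list criImageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := imagePreloaded("registry.k8s.io/kube-apiserver:v1.31.2")
	if err != nil || !ok {
		fmt.Println("image missing; a real run would scp the preloaded tarball and extract it with tar -I lz4 -C /var")
		return
	}
	fmt.Println("all images are preloaded, skipping loading")
}
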
	I1026 00:44:21.823539   18362 kubeadm.go:934] updating node { 192.168.39.207 8443 v1.31.2 crio true true} ...
	I1026 00:44:21.823638   18362 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-602145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-602145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 00:44:21.823701   18362 ssh_runner.go:195] Run: crio config
	I1026 00:44:21.863753   18362 cni.go:84] Creating CNI manager for ""
	I1026 00:44:21.863774   18362 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 00:44:21.863785   18362 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 00:44:21.863806   18362 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.207 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-602145 NodeName:addons-602145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 00:44:21.863906   18362 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-602145"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.207"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.207"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 00:44:21.863970   18362 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 00:44:21.873123   18362 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 00:44:21.873181   18362 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 00:44:21.881926   18362 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1026 00:44:21.897049   18362 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 00:44:21.911620   18362 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1026 00:44:21.926348   18362 ssh_runner.go:195] Run: grep 192.168.39.207	control-plane.minikube.internal$ /etc/hosts
	I1026 00:44:21.929745   18362 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 00:44:21.940587   18362 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 00:44:22.050090   18362 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 00:44:22.065281   18362 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145 for IP: 192.168.39.207
	I1026 00:44:22.065311   18362 certs.go:194] generating shared ca certs ...
	I1026 00:44:22.065330   18362 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.065512   18362 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 00:44:22.237379   18362 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt ...
	I1026 00:44:22.237412   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt: {Name:mk3c127015e37380407dc6638ce54fc88c77b493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.237591   18362 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key ...
	I1026 00:44:22.237601   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key: {Name:mk7de4df9acb036a6d7b414631e09603baf60c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.237672   18362 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 00:44:22.310306   18362 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt ...
	I1026 00:44:22.310332   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt: {Name:mk9e3186936c323000cec16bc2f982aa6ac345e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.310472   18362 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key ...
	I1026 00:44:22.310483   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key: {Name:mk12a79b4c0d797bf5c5e676c0e8da6a87984c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
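
The two "generating ... ca cert" steps above create self-signed certificate authorities (minikubeCA and proxyClientCA) and persist each as a .crt/.key pair before any profile certificates are issued. A minimal standard-library sketch of that idea follows; it is illustrative only, assumes a 2048-bit RSA key and a 10-year lifetime, and does not reproduce minikube's own crypto helpers or parameters.

// selfsignedca.go — hypothetical sketch of generating a CA like "minikubeCA".
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func writeCA(certPath, keyPath string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: the template is both the certificate and its issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile(certPath, certPEM, 0o644); err != nil {
		return err
	}
	return os.WriteFile(keyPath, keyPEM, 0o600)
}

func main() {
	if err := writeCA("ca.crt", "ca.key"); err != nil {
		panic(err)
	}
}
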
	I1026 00:44:22.310547   18362 certs.go:256] generating profile certs ...
	I1026 00:44:22.310594   18362 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.key
	I1026 00:44:22.310609   18362 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt with IP's: []
	I1026 00:44:22.414269   18362 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt ...
	I1026 00:44:22.414299   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: {Name:mk59642db8b1e44c55a4b368b376e78b938381d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.414454   18362 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.key ...
	I1026 00:44:22.414464   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.key: {Name:mk8af73069dc8211099d6ba14c77d7dc56b20e16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.414530   18362 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.key.cbb7ad52
	I1026 00:44:22.414547   18362 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.crt.cbb7ad52 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.207]
	I1026 00:44:22.522754   18362 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.crt.cbb7ad52 ...
	I1026 00:44:22.522786   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.crt.cbb7ad52: {Name:mk1be9ecb2bf9b4a0cde6cb7c2493e966bffd8f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.522931   18362 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.key.cbb7ad52 ...
	I1026 00:44:22.522942   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.key.cbb7ad52: {Name:mk0b09375294e59642b26e78c66ddf8850b79512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.523030   18362 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.crt.cbb7ad52 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.crt
	I1026 00:44:22.523109   18362 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.key.cbb7ad52 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.key
	I1026 00:44:22.523157   18362 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/proxy-client.key
	I1026 00:44:22.523173   18362 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/proxy-client.crt with IP's: []
	I1026 00:44:22.799300   18362 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/proxy-client.crt ...
	I1026 00:44:22.799330   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/proxy-client.crt: {Name:mk183b421d2ac65e5dd1715a5fb93c0771ff3857 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.799484   18362 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/proxy-client.key ...
	I1026 00:44:22.799494   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/proxy-client.key: {Name:mk8c565fbfea03d35d5b91237c40613d8e56f3f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:22.799648   18362 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 00:44:22.799685   18362 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 00:44:22.799709   18362 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 00:44:22.799732   18362 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 00:44:22.800321   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 00:44:22.828102   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 00:44:22.863802   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 00:44:22.885129   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 00:44:22.905574   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 00:44:22.926209   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 00:44:22.946806   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 00:44:22.967616   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 00:44:22.988355   18362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 00:44:23.008992   18362 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 00:44:23.023720   18362 ssh_runner.go:195] Run: openssl version
	I1026 00:44:23.029273   18362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 00:44:23.039291   18362 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 00:44:23.043401   18362 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 00:44:23.043460   18362 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 00:44:23.048759   18362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
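
The three commands above install the CA into the guest's trust store: link minikubeCA.pem into /etc/ssl/certs, compute its OpenSSL subject hash, and create the <hash>.0 symlink (b5213941.0 in this run) that TLS clients use for lookup. A minimal sketch of that hash-and-link step, not minikube's code:

// catrust.go — hypothetical sketch of the subject-hash symlink step.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath, certsDir string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, then point <hash>.0 at the PEM file.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("install failed:", err)
	}
}
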
	I1026 00:44:23.058732   18362 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 00:44:23.062382   18362 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 00:44:23.062436   18362 kubeadm.go:392] StartCluster: {Name:addons-602145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-602145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 00:44:23.062512   18362 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 00:44:23.062556   18362 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 00:44:23.095758   18362 cri.go:89] found id: ""
	I1026 00:44:23.095826   18362 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 00:44:23.104849   18362 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 00:44:23.113558   18362 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 00:44:23.122035   18362 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 00:44:23.122054   18362 kubeadm.go:157] found existing configuration files:
	
	I1026 00:44:23.122100   18362 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 00:44:23.130298   18362 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 00:44:23.130363   18362 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 00:44:23.139045   18362 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 00:44:23.147045   18362 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 00:44:23.147092   18362 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 00:44:23.155362   18362 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 00:44:23.163233   18362 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 00:44:23.163280   18362 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 00:44:23.171864   18362 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 00:44:23.180038   18362 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 00:44:23.180106   18362 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
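
The grep/rm sequence above is the stale-config cleanup before `kubeadm init`: any kubeconfig under /etc/kubernetes that does not already reference https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. A stand-alone sketch of the same loop (illustrative only, not the actual kubeadm.go code):

// cleanconfigs.go — hypothetical sketch of the stale kubeconfig cleanup.
package main

import (
	"fmt"
	"os/exec"
)

func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		// grep exits non-zero when the endpoint (or the file itself) is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
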
	I1026 00:44:23.188383   18362 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 00:44:23.331565   18362 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 00:44:32.692523   18362 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1026 00:44:32.692576   18362 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 00:44:32.692701   18362 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 00:44:32.692843   18362 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 00:44:32.692931   18362 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 00:44:32.692984   18362 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 00:44:32.694199   18362 out.go:235]   - Generating certificates and keys ...
	I1026 00:44:32.694278   18362 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 00:44:32.694371   18362 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 00:44:32.694467   18362 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 00:44:32.694556   18362 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1026 00:44:32.694636   18362 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1026 00:44:32.694718   18362 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1026 00:44:32.694802   18362 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1026 00:44:32.694954   18362 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-602145 localhost] and IPs [192.168.39.207 127.0.0.1 ::1]
	I1026 00:44:32.695025   18362 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1026 00:44:32.695173   18362 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-602145 localhost] and IPs [192.168.39.207 127.0.0.1 ::1]
	I1026 00:44:32.695264   18362 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 00:44:32.695365   18362 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 00:44:32.695432   18362 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1026 00:44:32.695513   18362 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 00:44:32.695586   18362 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 00:44:32.695665   18362 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 00:44:32.695760   18362 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 00:44:32.695819   18362 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 00:44:32.695866   18362 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 00:44:32.695948   18362 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 00:44:32.696047   18362 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 00:44:32.697241   18362 out.go:235]   - Booting up control plane ...
	I1026 00:44:32.697352   18362 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 00:44:32.697466   18362 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 00:44:32.697526   18362 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 00:44:32.697612   18362 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 00:44:32.697690   18362 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 00:44:32.697734   18362 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 00:44:32.697860   18362 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 00:44:32.697980   18362 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 00:44:32.698076   18362 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.91963ms
	I1026 00:44:32.698182   18362 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1026 00:44:32.698259   18362 kubeadm.go:310] [api-check] The API server is healthy after 5.501627653s
	I1026 00:44:32.698391   18362 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 00:44:32.698557   18362 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 00:44:32.698644   18362 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 00:44:32.698905   18362 kubeadm.go:310] [mark-control-plane] Marking the node addons-602145 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 00:44:32.698998   18362 kubeadm.go:310] [bootstrap-token] Using token: i9uyyo.fe8oo1yr6slh6qor
	I1026 00:44:32.700913   18362 out.go:235]   - Configuring RBAC rules ...
	I1026 00:44:32.701006   18362 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 00:44:32.701076   18362 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 00:44:32.701207   18362 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 00:44:32.701343   18362 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 00:44:32.701524   18362 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 00:44:32.701633   18362 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 00:44:32.701781   18362 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 00:44:32.701850   18362 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1026 00:44:32.701896   18362 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1026 00:44:32.701902   18362 kubeadm.go:310] 
	I1026 00:44:32.701950   18362 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1026 00:44:32.701955   18362 kubeadm.go:310] 
	I1026 00:44:32.702044   18362 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1026 00:44:32.702053   18362 kubeadm.go:310] 
	I1026 00:44:32.702077   18362 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1026 00:44:32.702143   18362 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 00:44:32.702199   18362 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 00:44:32.702208   18362 kubeadm.go:310] 
	I1026 00:44:32.702258   18362 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1026 00:44:32.702264   18362 kubeadm.go:310] 
	I1026 00:44:32.702325   18362 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 00:44:32.702334   18362 kubeadm.go:310] 
	I1026 00:44:32.702379   18362 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1026 00:44:32.702449   18362 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 00:44:32.702519   18362 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 00:44:32.702527   18362 kubeadm.go:310] 
	I1026 00:44:32.702649   18362 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 00:44:32.702780   18362 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1026 00:44:32.702788   18362 kubeadm.go:310] 
	I1026 00:44:32.702897   18362 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i9uyyo.fe8oo1yr6slh6qor \
	I1026 00:44:32.703034   18362 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d \
	I1026 00:44:32.703059   18362 kubeadm.go:310] 	--control-plane 
	I1026 00:44:32.703064   18362 kubeadm.go:310] 
	I1026 00:44:32.703156   18362 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1026 00:44:32.703173   18362 kubeadm.go:310] 
	I1026 00:44:32.703301   18362 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i9uyyo.fe8oo1yr6slh6qor \
	I1026 00:44:32.703442   18362 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d 
	I1026 00:44:32.703458   18362 cni.go:84] Creating CNI manager for ""
	I1026 00:44:32.703470   18362 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 00:44:32.705094   18362 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 00:44:32.706290   18362 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 00:44:32.718683   18362 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1026 00:44:32.736359   18362 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 00:44:32.736424   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:32.736425   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-602145 minikube.k8s.io/updated_at=2024_10_26T00_44_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=addons-602145 minikube.k8s.io/primary=true
	I1026 00:44:32.903007   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:32.903035   18362 ops.go:34] apiserver oom_adj: -16
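
The -16 recorded above comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` probe a few lines earlier: the API server's OOM adjustment is read straight from procfs. A small stand-alone sketch of the same probe (hypothetical helper name):

// oomadj.go — hypothetical sketch of reading the apiserver's oom_adj.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err // pgrep exits non-zero when nothing matches
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		return "", fmt.Errorf("no kube-apiserver process found")
	}
	data, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj) // this run observed -16
}
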
	I1026 00:44:33.403464   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:33.903212   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:34.403819   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:34.903467   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:35.403131   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:35.903901   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:36.404022   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:36.904019   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:37.403782   18362 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 00:44:37.524160   18362 kubeadm.go:1113] duration metric: took 4.787795845s to wait for elevateKubeSystemPrivileges
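
The repeated `kubectl get sa default` runs above form a poll loop: the command is retried roughly every half second until the default service account exists, and the total wait (4.787795845s here) is then reported as the elevateKubeSystemPrivileges duration. A minimal sketch of that wait pattern, assuming plain kubectl and a hypothetical two-minute timeout:

// waitsa.go — hypothetical sketch of polling for the default service account.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil // the service account exists; RBAC setup can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
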
	I1026 00:44:37.524192   18362 kubeadm.go:394] duration metric: took 14.461759067s to StartCluster
	I1026 00:44:37.524212   18362 settings.go:142] acquiring lock: {Name:mkb363a7a1b1532a7f832b54a0283d0a9e3d2b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:37.524331   18362 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 00:44:37.524758   18362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:44:37.524984   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 00:44:37.524988   18362 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 00:44:37.525093   18362 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1026 00:44:37.525180   18362 config.go:182] Loaded profile config "addons-602145": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 00:44:37.525218   18362 addons.go:69] Setting yakd=true in profile "addons-602145"
	I1026 00:44:37.525229   18362 addons.go:69] Setting gcp-auth=true in profile "addons-602145"
	I1026 00:44:37.525244   18362 addons.go:234] Setting addon yakd=true in "addons-602145"
	I1026 00:44:37.525255   18362 addons.go:69] Setting ingress-dns=true in profile "addons-602145"
	I1026 00:44:37.525258   18362 addons.go:69] Setting cloud-spanner=true in profile "addons-602145"
	I1026 00:44:37.525268   18362 addons.go:69] Setting storage-provisioner=true in profile "addons-602145"
	I1026 00:44:37.525276   18362 addons.go:234] Setting addon ingress-dns=true in "addons-602145"
	I1026 00:44:37.525280   18362 addons.go:234] Setting addon cloud-spanner=true in "addons-602145"
	I1026 00:44:37.525285   18362 addons.go:234] Setting addon storage-provisioner=true in "addons-602145"
	I1026 00:44:37.525279   18362 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-602145"
	I1026 00:44:37.525300   18362 addons.go:69] Setting volcano=true in profile "addons-602145"
	I1026 00:44:37.525310   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525316   18362 addons.go:69] Setting metrics-server=true in profile "addons-602145"
	I1026 00:44:37.525318   18362 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-602145"
	I1026 00:44:37.525325   18362 addons.go:69] Setting registry=true in profile "addons-602145"
	I1026 00:44:37.525328   18362 addons.go:234] Setting addon metrics-server=true in "addons-602145"
	I1026 00:44:37.525338   18362 addons.go:234] Setting addon registry=true in "addons-602145"
	I1026 00:44:37.525346   18362 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-602145"
	I1026 00:44:37.525238   18362 addons.go:69] Setting default-storageclass=true in profile "addons-602145"
	I1026 00:44:37.525363   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525378   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525349   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525290   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525311   18362 addons.go:234] Setting addon volcano=true in "addons-602145"
	I1026 00:44:37.525463   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525253   18362 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-602145"
	I1026 00:44:37.525520   18362 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-602145"
	I1026 00:44:37.525546   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525805   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.525364   18362 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-602145"
	I1026 00:44:37.525818   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.525844   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.525885   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.525896   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.525223   18362 addons.go:69] Setting inspektor-gadget=true in profile "addons-602145"
	I1026 00:44:37.525909   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.525916   18362 addons.go:234] Setting addon inspektor-gadget=true in "addons-602145"
	I1026 00:44:37.525925   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.525936   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.526125   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.526141   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.526151   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.526169   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.525807   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.526253   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.526260   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.526276   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.525311   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525311   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525296   18362 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-602145"
	I1026 00:44:37.526538   18362 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-602145"
	I1026 00:44:37.526649   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.526687   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.526852   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.526874   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.526879   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.526892   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.527892   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.532618   18362 out.go:177] * Verifying Kubernetes components...
	I1026 00:44:37.525317   18362 addons.go:69] Setting volumesnapshots=true in profile "addons-602145"
	I1026 00:44:37.525330   18362 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-602145"
	I1026 00:44:37.533251   18362 addons.go:234] Setting addon volumesnapshots=true in "addons-602145"
	I1026 00:44:37.533287   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.525251   18362 mustload.go:65] Loading cluster: addons-602145
	I1026 00:44:37.533490   18362 config.go:182] Loaded profile config "addons-602145": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 00:44:37.533857   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.533880   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.533890   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.533927   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.525805   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.534417   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.537530   18362 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 00:44:37.525255   18362 addons.go:69] Setting ingress=true in profile "addons-602145"
	I1026 00:44:37.537650   18362 addons.go:234] Setting addon ingress=true in "addons-602145"
	I1026 00:44:37.537710   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.533285   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.547335   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46665
	I1026 00:44:37.547760   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34845
	I1026 00:44:37.548321   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.548847   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.548872   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.549047   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I1026 00:44:37.549332   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.549440   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.550026   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.550056   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.550065   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.550406   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.551725   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.551767   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.554099   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.554123   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.554313   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.554346   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.556482   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.556571   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45555
	I1026 00:44:37.556651   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42487
	I1026 00:44:37.556764   18362 addons.go:234] Setting addon default-storageclass=true in "addons-602145"
	I1026 00:44:37.556806   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.557155   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.557179   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.557357   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.557589   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.557603   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.557811   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.557822   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.558087   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.558101   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.558358   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46479
	I1026 00:44:37.558606   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.558635   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.558670   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.558713   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.558898   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.558957   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.559008   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.559426   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.559467   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.560165   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.560183   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.560589   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.560622   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.568211   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.568835   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.568864   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.572891   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I1026 00:44:37.581662   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I1026 00:44:37.582774   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.583113   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33553
	I1026 00:44:37.583374   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.583401   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.583629   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.583741   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.583763   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41025
	I1026 00:44:37.584228   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.584248   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.584259   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.584277   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.584311   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.584740   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.584758   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.584813   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.584972   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.586257   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.586547   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36559
	I1026 00:44:37.587042   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.587092   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.587404   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.587414   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.587775   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.587817   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.588258   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.589032   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1026 00:44:37.589106   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44761
	I1026 00:44:37.590011   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.590079   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.590247   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.590287   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.590516   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39927
	I1026 00:44:37.590592   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.590613   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.590768   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.590811   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.591098   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.591216   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.591321   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.591489   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.591644   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.591683   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.592315   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.592331   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.592526   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1026 00:44:37.592652   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.593154   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.593188   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.594957   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.595201   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1026 00:44:37.595422   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.595494   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.597742   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1026 00:44:37.598090   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35057
	I1026 00:44:37.598120   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1026 00:44:37.598255   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.598823   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.598841   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.599208   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.599398   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.599738   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.600302   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.600319   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.600710   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.601050   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1026 00:44:37.601234   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.601273   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.601282   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.603097   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1026 00:44:37.603104   18362 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1026 00:44:37.604138   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I1026 00:44:37.604637   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.604787   18362 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 00:44:37.604809   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1026 00:44:37.604828   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.605221   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.605238   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.605627   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.605980   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1026 00:44:37.606513   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.606556   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.606752   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I1026 00:44:37.607081   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.607505   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.607522   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.607825   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.608323   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.608359   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.608572   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.609452   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.609480   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.609528   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1026 00:44:37.609934   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.610103   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.610214   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.610310   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.612043   18362 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1026 00:44:37.612061   18362 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1026 00:44:37.612079   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.615678   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.616067   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.616091   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.616262   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.616423   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.616535   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.616631   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.625452   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I1026 00:44:37.626195   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.626765   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.626790   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.627406   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.627967   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.628014   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.633598   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38231
	I1026 00:44:37.634099   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.634662   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.634680   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.635066   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.635244   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.635661   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43011
	I1026 00:44:37.636102   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.636587   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.636606   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.636958   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.637105   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.637273   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.638928   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.639062   18362 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1026 00:44:37.640606   18362 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1026 00:44:37.640607   18362 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1026 00:44:37.640678   18362 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1026 00:44:37.640697   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.642013   18362 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 00:44:37.642029   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1026 00:44:37.642135   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.643012   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34063
	I1026 00:44:37.643839   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.644182   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39651
	I1026 00:44:37.644277   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.644290   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.644641   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.644836   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.644993   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.645317   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.645884   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.645901   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.645973   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46721
	I1026 00:44:37.645982   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.645996   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.646226   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.646286   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.646448   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.646784   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.646804   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.646820   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.647010   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.647033   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.647054   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.647094   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.647265   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.647399   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.647417   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.647455   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.647499   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.647840   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.647852   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.647989   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.649334   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.649402   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35713
	I1026 00:44:37.649950   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.650309   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.651140   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.651157   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.651174   18362 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1026 00:44:37.651600   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.651610   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.651801   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.651874   18362 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1026 00:44:37.653372   18362 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 00:44:37.653390   18362 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 00:44:37.653402   18362 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1026 00:44:37.653409   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.653565   18362 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 00:44:37.653581   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1026 00:44:37.653596   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.654762   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I1026 00:44:37.655150   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.656152   18362 out.go:177]   - Using image docker.io/registry:2.8.3
	I1026 00:44:37.656419   18362 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-602145"
	I1026 00:44:37.656457   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:37.656827   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.656858   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.657505   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.657522   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.657574   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.657745   18362 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1026 00:44:37.657765   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1026 00:44:37.657782   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.657948   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.658288   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.659213   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.660542   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.661001   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.661078   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.661344   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.661355   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.661362   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.661755   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.661927   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.661943   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.661960   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.661974   18362 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 00:44:37.662117   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.662151   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.662309   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.662318   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I1026 00:44:37.662360   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.662479   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.662821   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.662824   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.663827   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.663914   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.663941   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.664091   18362 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 00:44:37.664108   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 00:44:37.664123   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.664127   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.664271   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.664389   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.664672   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.665026   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.665243   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38539
	I1026 00:44:37.665590   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.666004   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.666022   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.666356   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.666543   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.668518   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43783
	I1026 00:44:37.668836   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46319
	I1026 00:44:37.669078   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.669160   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.669467   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I1026 00:44:37.669711   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.669715   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.669733   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.669746   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.670093   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.670165   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.670392   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.670757   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.670773   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.671106   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.671227   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.671436   18362 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 00:44:37.671450   18362 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 00:44:37.671463   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.671477   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.671516   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I1026 00:44:37.672099   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.672168   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.672256   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.672503   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.672596   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:37.672622   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:37.674584   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.674594   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.674626   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:37.674633   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:37.674641   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:37.674642   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:37.674647   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:37.674727   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.674894   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.674907   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.674979   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:37.674989   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:37.675000   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	W1026 00:44:37.675082   18362 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1026 00:44:37.675302   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.676501   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.676531   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.676681   18362 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1026 00:44:37.676823   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.676689   18362 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1026 00:44:37.676839   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.677070   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.677248   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.677267   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.677282   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.677366   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.677527   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.677532   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.677693   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.677818   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.677928   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.678261   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.678285   18362 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1026 00:44:37.678299   18362 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1026 00:44:37.678314   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.679192   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.679607   18362 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1026 00:44:37.679685   18362 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1026 00:44:37.680609   18362 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1026 00:44:37.681568   18362 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1026 00:44:37.681586   18362 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1026 00:44:37.681603   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.682164   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.682378   18362 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1026 00:44:37.682477   18362 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1026 00:44:37.682494   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1026 00:44:37.682510   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.682563   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.682580   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.683107   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.683284   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.683467   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.683616   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.683969   18362 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 00:44:37.683988   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1026 00:44:37.684006   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.686149   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37629
	I1026 00:44:37.686760   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.686861   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36827
	I1026 00:44:37.687545   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.687553   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.687661   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.687672   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.688039   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.688041   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.688086   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.688093   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.688239   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.688766   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.688850   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.688863   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.688915   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.689166   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.689188   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.689226   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.689525   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.689700   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:37.689730   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:37.689792   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.689810   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.689837   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.689994   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.690140   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.690148   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.690183   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.690259   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.690401   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.690409   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.690525   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:37.690581   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	W1026 00:44:37.701041   18362 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44234->192.168.39.207:22: read: connection reset by peer
	I1026 00:44:37.701085   18362 retry.go:31] will retry after 317.954236ms: ssh: handshake failed: read tcp 192.168.39.1:44234->192.168.39.207:22: read: connection reset by peer
	W1026 00:44:37.701165   18362 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44246->192.168.39.207:22: read: connection reset by peer
	I1026 00:44:37.701182   18362 retry.go:31] will retry after 242.443302ms: ssh: handshake failed: read tcp 192.168.39.1:44246->192.168.39.207:22: read: connection reset by peer
	I1026 00:44:37.707347   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46615
	I1026 00:44:37.707712   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:37.708064   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:37.708080   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:37.708342   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:37.708455   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:37.710059   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:37.711647   18362 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1026 00:44:37.712855   18362 out.go:177]   - Using image docker.io/busybox:stable
	I1026 00:44:37.714048   18362 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 00:44:37.714066   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1026 00:44:37.714084   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:37.717234   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.717703   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:37.717722   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:37.717884   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:37.718033   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:37.718181   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:37.718263   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	W1026 00:44:37.718839   18362 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44264->192.168.39.207:22: read: connection reset by peer
	I1026 00:44:37.718858   18362 retry.go:31] will retry after 245.014763ms: ssh: handshake failed: read tcp 192.168.39.1:44264->192.168.39.207:22: read: connection reset by peer
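
The sshutil/retry lines above show minikube's pattern of retrying a failed SSH dial after a short, jittered delay rather than aborting the addon install. Below is a minimal, self-contained Go sketch of that kind of backoff retry; the names (retryWithBackoff, the attempt counts, the base delay) are invented for illustration and this is not minikube's actual retry.go implementation.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries op up to attempts times, sleeping a jittered
// multiple of base between failures, mirroring the "will retry after ..."
// log lines above.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Jitter the delay so concurrent dials do not retry in lockstep.
		sleep := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(3, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("ssh: handshake failed (attempt %d)", calls)
		}
		return nil
	})
	fmt.Println("final result:", err)
}
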
	I1026 00:44:37.928032   18362 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 00:44:37.928057   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1026 00:44:38.040676   18362 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1026 00:44:38.040702   18362 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1026 00:44:38.044989   18362 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 00:44:38.045018   18362 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 00:44:38.132483   18362 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1026 00:44:38.132510   18362 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1026 00:44:38.158001   18362 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1026 00:44:38.158030   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1026 00:44:38.215125   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 00:44:38.232449   18362 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 00:44:38.232477   18362 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 00:44:38.233017   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 00:44:38.238582   18362 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 00:44:38.238769   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
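
For readability: the /bin/bash -c pipeline above fetches the coredns ConfigMap, uses sed to insert a hosts block ahead of the "forward . /etc/resolv.conf" directive and a log directive ahead of errors, then feeds the result back through kubectl replace. Reconstructed from those sed expressions alone (not captured from the cluster), the affected part of the patched Corefile would look roughly like:

        log
        errors
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

This is what later lets in-cluster pods resolve host.minikube.internal to the host-side address 192.168.39.1, as confirmed by the "host record injected into CoreDNS's ConfigMap" line further down.
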
	I1026 00:44:38.250194   18362 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1026 00:44:38.250213   18362 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1026 00:44:38.269943   18362 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1026 00:44:38.269972   18362 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1026 00:44:38.290982   18362 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1026 00:44:38.291014   18362 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1026 00:44:38.297019   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 00:44:38.305877   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 00:44:38.330805   18362 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1026 00:44:38.330825   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1026 00:44:38.353435   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 00:44:38.355323   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 00:44:38.424194   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1026 00:44:38.443435   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 00:44:38.456588   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 00:44:38.471090   18362 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1026 00:44:38.471112   18362 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1026 00:44:38.506967   18362 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1026 00:44:38.506990   18362 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1026 00:44:38.530279   18362 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1026 00:44:38.530311   18362 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1026 00:44:38.535380   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1026 00:44:38.610025   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 00:44:38.741707   18362 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1026 00:44:38.741731   18362 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1026 00:44:38.747341   18362 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1026 00:44:38.747360   18362 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1026 00:44:38.756637   18362 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1026 00:44:38.756657   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1026 00:44:38.863447   18362 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1026 00:44:38.863473   18362 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1026 00:44:38.873943   18362 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1026 00:44:38.873967   18362 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1026 00:44:38.911896   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1026 00:44:39.105510   18362 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 00:44:39.105534   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1026 00:44:39.160191   18362 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1026 00:44:39.160225   18362 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1026 00:44:39.488417   18362 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1026 00:44:39.488441   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1026 00:44:39.523799   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 00:44:39.711790   18362 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1026 00:44:39.711825   18362 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1026 00:44:39.814961   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.599795174s)
	I1026 00:44:39.815007   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:39.815019   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:39.815315   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:39.815334   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:39.815361   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:39.815376   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:39.815384   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:39.815766   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:39.815778   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:39.815783   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:39.925289   18362 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1026 00:44:39.925313   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1026 00:44:40.290359   18362 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1026 00:44:40.290385   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1026 00:44:40.631685   18362 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 00:44:40.631740   18362 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1026 00:44:40.954205   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 00:44:42.716580   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.483528072s)
	I1026 00:44:42.716612   18362 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.478000293s)
	I1026 00:44:42.716635   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:42.716664   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:42.716721   18362 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.477905606s)
	I1026 00:44:42.716756   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.419714819s)
	I1026 00:44:42.716753   18362 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1026 00:44:42.716775   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:42.716784   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:42.716846   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.410946596s)
	I1026 00:44:42.716866   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:42.716874   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:42.717155   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:42.717163   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:42.717178   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:42.717183   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:42.717192   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:42.717200   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:42.717207   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:42.717214   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:42.717221   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:42.717228   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:42.717272   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:42.717308   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:42.717317   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:42.717330   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:42.717336   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:42.717449   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:42.717474   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:42.717484   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:42.717500   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:42.717507   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:42.717598   18362 node_ready.go:35] waiting up to 6m0s for node "addons-602145" to be "Ready" ...
	I1026 00:44:42.717686   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:42.717716   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:42.717726   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:42.752808   18362 node_ready.go:49] node "addons-602145" has status "Ready":"True"
	I1026 00:44:42.752829   18362 node_ready.go:38] duration metric: took 35.207505ms for node "addons-602145" to be "Ready" ...
	I1026 00:44:42.752838   18362 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
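
The node_ready/pod_ready lines above poll the API server until the node reports the Ready condition and the system-critical pods follow. A minimal client-go sketch of that readiness poll is below; the kubeconfig path and the fixed 2-second poll interval are assumptions for illustration, not values taken from this run.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-602145", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node addons-602145 is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node addons-602145 to be Ready")
}
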
	I1026 00:44:42.823086   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:42.823107   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:42.823345   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:42.823367   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:42.823392   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:42.836076   18362 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-j7hfs" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:43.253149   18362 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-602145" context rescaled to 1 replicas
	I1026 00:44:43.431917   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.078445578s)
	I1026 00:44:43.431970   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.431976   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.076624651s)
	I1026 00:44:43.432021   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.007795594s)
	I1026 00:44:43.432025   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.432108   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.432115   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.988649739s)
	I1026 00:44:43.432138   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.432150   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.431987   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.432056   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.432200   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.432254   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.975635606s)
	I1026 00:44:43.432318   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.432309   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.896905927s)
	I1026 00:44:43.432361   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.432377   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.432338   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.432597   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.432630   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.432645   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.432655   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.432663   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.432670   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.432736   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.432751   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.432761   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.432772   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.432795   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.432825   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.432833   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.432840   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.432913   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.432942   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.432965   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.432971   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.432980   18362 addons.go:475] Verifying addon metrics-server=true in "addons-602145"
	I1026 00:44:43.433017   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.433034   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.433030   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.433044   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.433056   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.433108   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.433119   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.433126   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.433132   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.433174   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.433181   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.433189   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.433195   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.433485   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.433516   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.433527   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.433589   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.433604   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.433613   18362 addons.go:475] Verifying addon registry=true in "addons-602145"
	I1026 00:44:43.434342   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.434374   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.434381   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.434605   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.434638   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.434645   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.435854   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.435894   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.435901   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.436531   18362 out.go:177] * Verifying registry addon...
	I1026 00:44:43.438465   18362 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1026 00:44:43.504446   18362 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 00:44:43.504480   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:43.533385   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:43.533411   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:43.533674   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:43.533695   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:43.533681   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:43.949853   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:44.469071   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:44.731892   18362 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1026 00:44:44.731935   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:44.734738   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:44.735129   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:44.735158   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:44.735356   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:44.735538   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:44.735678   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:44.735812   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:44.895478   18362 pod_ready.go:103] pod "amd-gpu-device-plugin-j7hfs" in "kube-system" namespace has status "Ready":"False"
	I1026 00:44:44.975559   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:44.984750   18362 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1026 00:44:45.019449   18362 addons.go:234] Setting addon gcp-auth=true in "addons-602145"
	I1026 00:44:45.019505   18362 host.go:66] Checking if "addons-602145" exists ...
	I1026 00:44:45.019903   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:45.019950   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:45.034890   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44153
	I1026 00:44:45.035347   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:45.035830   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:45.035850   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:45.036171   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:45.036611   18362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:44:45.036664   18362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:44:45.051378   18362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I1026 00:44:45.051875   18362 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:44:45.052365   18362 main.go:141] libmachine: Using API Version  1
	I1026 00:44:45.052398   18362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:44:45.052786   18362 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:44:45.053001   18362 main.go:141] libmachine: (addons-602145) Calling .GetState
	I1026 00:44:45.054512   18362 main.go:141] libmachine: (addons-602145) Calling .DriverName
	I1026 00:44:45.054755   18362 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1026 00:44:45.054783   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHHostname
	I1026 00:44:45.057144   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:45.057472   18362 main.go:141] libmachine: (addons-602145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:12:e0", ip: ""} in network mk-addons-602145: {Iface:virbr1 ExpiryTime:2024-10-26 01:44:10 +0000 UTC Type:0 Mac:52:54:00:c1:12:e0 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:addons-602145 Clientid:01:52:54:00:c1:12:e0}
	I1026 00:44:45.057500   18362 main.go:141] libmachine: (addons-602145) DBG | domain addons-602145 has defined IP address 192.168.39.207 and MAC address 52:54:00:c1:12:e0 in network mk-addons-602145
	I1026 00:44:45.057639   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHPort
	I1026 00:44:45.057807   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHKeyPath
	I1026 00:44:45.057966   18362 main.go:141] libmachine: (addons-602145) Calling .GetSSHUsername
	I1026 00:44:45.058136   18362 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/addons-602145/id_rsa Username:docker}
	I1026 00:44:45.446069   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:45.693687   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.083625407s)
	I1026 00:44:45.693731   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.781796676s)
	I1026 00:44:45.693766   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:45.693784   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:45.693738   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:45.693837   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:45.693844   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.170009353s)
	W1026 00:44:45.693885   18362 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 00:44:45.693908   18362 retry.go:31] will retry after 342.657784ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 00:44:45.694030   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:45.694047   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:45.694056   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:45.694070   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:45.694237   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:45.694243   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:45.694262   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:45.694271   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:45.694282   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:45.694326   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:45.694350   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:45.695532   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:45.695549   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:45.695560   18362 addons.go:475] Verifying addon ingress=true in "addons-602145"
	I1026 00:44:45.695758   18362 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-602145 service yakd-dashboard -n yakd-dashboard
	
	I1026 00:44:45.696906   18362 out.go:177] * Verifying ingress addon...
	I1026 00:44:45.699303   18362 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1026 00:44:45.724699   18362 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 00:44:45.724724   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:45.942829   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:46.037746   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 00:44:46.204124   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:46.463536   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:46.723036   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:46.735975   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.781710044s)
	I1026 00:44:46.736024   18362 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.681245868s)
	I1026 00:44:46.736027   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:46.736181   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:46.736527   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:46.736549   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:46.736557   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:46.736564   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:46.736571   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:46.736774   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:46.736806   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:46.736819   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:46.736829   18362 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-602145"
	I1026 00:44:46.737666   18362 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1026 00:44:46.738662   18362 out.go:177] * Verifying csi-hostpath-driver addon...
	I1026 00:44:46.740355   18362 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1026 00:44:46.741014   18362 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1026 00:44:46.741807   18362 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1026 00:44:46.741823   18362 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1026 00:44:46.753149   18362 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 00:44:46.753169   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:46.882573   18362 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1026 00:44:46.882599   18362 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1026 00:44:46.943810   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:46.965144   18362 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 00:44:46.965163   18362 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1026 00:44:47.047486   18362 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 00:44:47.205884   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:47.549679   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:47.550319   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:47.557275   18362 pod_ready.go:103] pod "amd-gpu-device-plugin-j7hfs" in "kube-system" namespace has status "Ready":"False"
	I1026 00:44:47.703212   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:47.805956   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:47.957870   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:48.204067   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:48.245098   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:48.361982   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.324185805s)
	I1026 00:44:48.362033   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:48.362088   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:48.362098   18362 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.314567887s)
	I1026 00:44:48.362138   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:48.362155   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:48.362360   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:48.362375   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:48.362383   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:48.362391   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:48.362495   18362 main.go:141] libmachine: (addons-602145) DBG | Closing plugin on server side
	I1026 00:44:48.362507   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:48.362551   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:48.362564   18362 main.go:141] libmachine: Making call to close driver server
	I1026 00:44:48.362572   18362 main.go:141] libmachine: (addons-602145) Calling .Close
	I1026 00:44:48.362592   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:48.362606   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:48.364153   18362 main.go:141] libmachine: Successfully made call to close driver server
	I1026 00:44:48.364180   18362 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 00:44:48.365500   18362 addons.go:475] Verifying addon gcp-auth=true in "addons-602145"
	I1026 00:44:48.367541   18362 out.go:177] * Verifying gcp-auth addon...
	I1026 00:44:48.369997   18362 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1026 00:44:48.372722   18362 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1026 00:44:48.372736   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:48.441724   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:48.703279   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:48.744887   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:48.873772   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:48.945509   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:49.205267   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:49.246195   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:49.375470   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:49.443789   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:49.703625   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:49.747021   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:49.841644   18362 pod_ready.go:103] pod "amd-gpu-device-plugin-j7hfs" in "kube-system" namespace has status "Ready":"False"
	I1026 00:44:49.873239   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:49.942516   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:50.203094   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:50.245485   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:50.373329   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:50.442392   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:50.704477   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:50.746397   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:50.873647   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:50.943332   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:51.205843   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:51.362986   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:51.519995   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:51.520530   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:51.705334   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:51.745974   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:51.842552   18362 pod_ready.go:103] pod "amd-gpu-device-plugin-j7hfs" in "kube-system" namespace has status "Ready":"False"
	I1026 00:44:51.873122   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:51.942249   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:52.203955   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:52.246482   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:52.373399   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:52.442289   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:52.703594   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:52.746304   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:52.873837   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:52.942797   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:53.205474   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:53.306800   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:53.405814   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:53.443231   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:53.704811   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:53.746870   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:53.842727   18362 pod_ready.go:103] pod "amd-gpu-device-plugin-j7hfs" in "kube-system" namespace has status "Ready":"False"
	I1026 00:44:53.873246   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:53.942900   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:54.203283   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:54.245640   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:54.373845   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:54.441683   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:54.702911   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:54.745894   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:54.874021   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:54.944100   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:55.205119   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:55.244943   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:55.373657   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:55.442641   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:55.703833   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:55.745846   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:55.877002   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:55.975134   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:56.204207   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:56.246187   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:56.342170   18362 pod_ready.go:93] pod "amd-gpu-device-plugin-j7hfs" in "kube-system" namespace has status "Ready":"True"
	I1026 00:44:56.342196   18362 pod_ready.go:82] duration metric: took 13.506093961s for pod "amd-gpu-device-plugin-j7hfs" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.342207   18362 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-27zzz" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.343926   18362 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-27zzz" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-27zzz" not found
	I1026 00:44:56.343943   18362 pod_ready.go:82] duration metric: took 1.730601ms for pod "coredns-7c65d6cfc9-27zzz" in "kube-system" namespace to be "Ready" ...
	E1026 00:44:56.343951   18362 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-27zzz" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-27zzz" not found
	I1026 00:44:56.343958   18362 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rg759" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.348350   18362 pod_ready.go:93] pod "coredns-7c65d6cfc9-rg759" in "kube-system" namespace has status "Ready":"True"
	I1026 00:44:56.348367   18362 pod_ready.go:82] duration metric: took 4.403788ms for pod "coredns-7c65d6cfc9-rg759" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.348378   18362 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-602145" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.352322   18362 pod_ready.go:93] pod "etcd-addons-602145" in "kube-system" namespace has status "Ready":"True"
	I1026 00:44:56.352339   18362 pod_ready.go:82] duration metric: took 3.953676ms for pod "etcd-addons-602145" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.352346   18362 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-602145" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.356524   18362 pod_ready.go:93] pod "kube-apiserver-addons-602145" in "kube-system" namespace has status "Ready":"True"
	I1026 00:44:56.356544   18362 pod_ready.go:82] duration metric: took 4.190127ms for pod "kube-apiserver-addons-602145" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.356554   18362 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-602145" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.372514   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:56.443587   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:56.540214   18362 pod_ready.go:93] pod "kube-controller-manager-addons-602145" in "kube-system" namespace has status "Ready":"True"
	I1026 00:44:56.540242   18362 pod_ready.go:82] duration metric: took 183.679309ms for pod "kube-controller-manager-addons-602145" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.540256   18362 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zmp9p" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.719402   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:56.744353   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:56.873914   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:56.941651   18362 pod_ready.go:93] pod "kube-proxy-zmp9p" in "kube-system" namespace has status "Ready":"True"
	I1026 00:44:56.941679   18362 pod_ready.go:82] duration metric: took 401.416415ms for pod "kube-proxy-zmp9p" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.941691   18362 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-602145" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:56.942034   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:57.205326   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:57.245298   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:57.340147   18362 pod_ready.go:93] pod "kube-scheduler-addons-602145" in "kube-system" namespace has status "Ready":"True"
	I1026 00:44:57.340172   18362 pod_ready.go:82] duration metric: took 398.474577ms for pod "kube-scheduler-addons-602145" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:57.340182   18362 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace to be "Ready" ...
	I1026 00:44:57.374156   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:57.442078   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:57.703414   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:57.744761   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:57.873438   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:57.943106   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:58.203321   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:58.245332   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:58.373612   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:58.442763   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:58.704102   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:58.745661   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:58.872933   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:58.942251   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:59.203926   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:59.245830   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:59.346756   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:44:59.373514   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:59.442649   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:44:59.704361   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:44:59.805106   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:44:59.872826   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:44:59.942023   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:00.202986   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:00.245136   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:00.373390   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:00.442345   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:00.708857   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:00.745797   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:00.874326   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:00.942406   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:01.203852   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:01.523881   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:01.524414   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:01.524879   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:01.781726   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:01.783834   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:01.784227   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:01.882559   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:01.943287   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:02.209786   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:02.246413   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:02.373542   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:02.443290   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:02.703820   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:02.745281   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:02.874104   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:02.942494   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:03.206266   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:03.245887   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:03.373668   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:03.443101   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:03.703504   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:03.746877   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:03.846919   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:03.874285   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:03.942660   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:04.203304   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:04.245799   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:04.373094   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:04.442378   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:04.704115   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:04.745698   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:04.873998   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:04.942566   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:05.204699   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:05.244827   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:05.375213   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:05.442746   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:05.706556   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:05.746964   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:05.849014   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:05.874917   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:05.941951   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:06.203121   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:06.244747   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:06.372654   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:06.443598   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:06.703436   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:06.749167   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:06.874367   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:06.942724   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:07.202754   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:07.245540   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:07.374815   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:07.441652   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:07.703513   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:07.745931   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:07.874202   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:07.942630   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:08.204363   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:08.245617   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:08.346158   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:08.374115   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:08.441895   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:08.708945   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:08.745960   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:08.874476   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:08.942871   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:09.204122   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:09.245298   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:09.373372   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:09.443443   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:09.704682   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:09.744928   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:09.874968   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:09.975875   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:10.203299   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:10.245007   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:10.374254   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:10.442084   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:10.703910   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:10.745834   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:10.847059   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:10.873791   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:10.941681   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:11.203677   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:11.517728   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:11.518085   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:11.520655   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:11.703377   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:11.745588   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:11.873978   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:11.942034   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:12.203617   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:12.246445   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:12.375533   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:12.476158   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:12.703613   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:12.745773   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:12.851302   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:12.873251   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:12.942460   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:13.203975   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:13.245184   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:13.373276   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:13.445082   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:13.703701   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:13.744735   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:13.874243   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:13.942071   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:14.203475   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:14.245882   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:14.373800   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:14.441539   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:14.702810   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:14.745864   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:14.873647   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:14.942798   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:15.203935   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:15.245657   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:15.346178   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:15.373082   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:15.442334   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:15.703410   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:15.745174   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:15.874384   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:15.942177   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:16.204126   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:16.245428   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:16.380021   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:16.442424   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:16.704115   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:16.745604   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:16.873445   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:16.942687   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:17.203770   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:17.245539   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:17.346253   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:17.374393   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:17.443133   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:17.704708   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:17.746018   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:17.873991   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:17.941872   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:18.204076   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:18.244938   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:18.374537   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:18.443107   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:18.703385   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:18.745999   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:18.873760   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:18.942473   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:19.204188   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:19.245672   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:19.346495   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:19.373531   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:19.442934   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:19.703943   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:19.745986   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:19.874432   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:19.975215   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:20.204299   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:20.245866   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:20.373534   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:20.442584   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 00:45:20.703845   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:20.745768   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:20.873632   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:20.942856   18362 kapi.go:107] duration metric: took 37.504386247s to wait for kubernetes.io/minikube-addons=registry ...
	I1026 00:45:21.203691   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:21.718316   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:21.719992   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:21.721813   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:21.727889   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:21.746266   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:21.876386   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:22.203397   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:22.246100   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:22.374307   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:22.703433   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:22.751135   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:22.874402   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:23.203594   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:23.246197   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:23.373642   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:23.703199   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:23.745499   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:23.845679   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:23.873091   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:24.204350   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:24.245855   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:24.373952   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:24.708382   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:24.746052   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:24.873844   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:25.203312   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:25.245246   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:25.373404   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:25.703887   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:25.745384   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:25.846166   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:25.873728   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:26.204992   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:26.246774   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:26.373153   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:27.034642   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:27.034774   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:27.035772   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:27.208103   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:27.310868   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:27.374097   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:27.705208   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:27.746489   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:27.846304   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:27.873446   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:28.203266   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:28.245142   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:28.373287   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:28.703102   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:28.746115   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:28.882349   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:29.202937   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:29.245150   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:29.373432   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:29.703219   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:29.745355   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:29.874163   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:30.204270   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:30.245812   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:30.346348   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:30.372767   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:30.703687   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:30.758149   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:30.876840   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:31.203626   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:31.246555   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:31.373264   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:31.704643   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:31.744999   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:31.872829   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:32.213626   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:32.245754   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:32.347297   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:32.375821   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:32.703358   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:32.745589   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:32.873685   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:33.685837   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:33.689347   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:33.689544   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:33.788307   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:33.788838   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:33.882954   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:34.203196   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:34.245612   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:34.375063   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:34.705246   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:34.745890   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:34.851682   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:34.879350   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:35.204282   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:35.247070   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:35.373536   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:35.705310   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:35.746615   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:35.874691   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:36.205556   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:36.306953   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:36.373541   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:36.704426   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:36.747970   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:36.873760   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:37.203880   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:37.244788   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:37.347153   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:37.375129   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:37.704319   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:37.746360   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:37.873600   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:38.203914   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:38.304512   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:38.373711   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:38.703310   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:38.745662   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:38.873875   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:39.204326   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:39.305799   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:39.348170   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:39.375024   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:39.704110   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:39.745976   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:39.873391   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:40.203434   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:40.246393   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:40.374693   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:40.704215   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:40.746495   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:40.873587   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:41.203662   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:41.249521   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:41.374419   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:41.704331   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:41.745312   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:41.846428   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:41.873302   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:42.204261   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:42.309990   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:42.405253   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:42.703723   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:42.745313   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:42.873147   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:43.204558   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:43.249371   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:43.373507   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:43.703540   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:43.770832   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:43.847333   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:43.873632   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:44.203399   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:44.251147   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:44.373454   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:44.708453   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:44.747356   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:44.873558   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:45.203629   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:45.245567   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:45.372905   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:45.706111   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:45.752737   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:45.875564   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:46.206010   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:46.246093   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:46.591522   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:46.594740   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:46.704200   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:46.745563   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:46.873700   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:47.207585   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:47.245372   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:47.396791   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:47.705237   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:47.746261   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:47.873591   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:48.203607   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:48.245750   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:48.373572   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:48.703237   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:48.746179   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:48.846385   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:48.872741   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:49.203417   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:49.245638   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:49.374087   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:49.706144   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:49.746583   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:49.874153   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:50.204285   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:50.245595   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:50.373859   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:50.704066   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:50.745445   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:50.847331   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:50.874005   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:51.204304   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:51.304987   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:51.374293   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:51.704508   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:51.745943   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:51.873702   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:52.203338   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:52.245748   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:52.373185   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:52.946854   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:52.946911   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 00:45:52.946989   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:52.948333   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:53.212300   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:53.245376   18362 kapi.go:107] duration metric: took 1m6.504358421s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1026 00:45:53.373235   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:53.704173   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:53.873751   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:54.204310   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:54.374257   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:54.704159   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:54.873323   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:55.204425   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:55.345758   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:55.373543   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:55.703801   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:55.873292   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:56.203933   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:56.373571   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:56.703678   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:56.872775   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:57.203929   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:57.351056   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:57.375266   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:57.704608   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:57.874394   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:58.203938   18362 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 00:45:58.373398   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:58.703241   18362 kapi.go:107] duration metric: took 1m13.003934885s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1026 00:45:58.873818   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:59.374290   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:45:59.845661   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:45:59.873380   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:46:00.373811   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:46:01.220661   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:46:01.373704   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:46:01.846338   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:01.873373   18362 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 00:46:02.373762   18362 kapi.go:107] duration metric: took 1m14.003763064s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1026 00:46:02.375526   18362 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-602145 cluster.
	I1026 00:46:02.377048   18362 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1026 00:46:02.378493   18362 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1026 00:46:02.379964   18362 out.go:177] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, default-storageclass, metrics-server, inspektor-gadget, ingress-dns, cloud-spanner, storage-provisioner-rancher, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1026 00:46:02.381214   18362 addons.go:510] duration metric: took 1m24.856119786s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin storage-provisioner default-storageclass metrics-server inspektor-gadget ingress-dns cloud-spanner storage-provisioner-rancher yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1026 00:46:03.846374   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:05.846858   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:08.346208   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:10.846564   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:13.345548   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:15.345909   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:17.346689   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:19.346733   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:21.847027   18362 pod_ready.go:103] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"False"
	I1026 00:46:23.347691   18362 pod_ready.go:93] pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace has status "Ready":"True"
	I1026 00:46:23.347715   18362 pod_ready.go:82] duration metric: took 1m26.007527804s for pod "metrics-server-84c5f94fbc-h4pf5" in "kube-system" namespace to be "Ready" ...
	I1026 00:46:23.347725   18362 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-njbmm" in "kube-system" namespace to be "Ready" ...
	I1026 00:46:23.352273   18362 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-njbmm" in "kube-system" namespace has status "Ready":"True"
	I1026 00:46:23.352295   18362 pod_ready.go:82] duration metric: took 4.562869ms for pod "nvidia-device-plugin-daemonset-njbmm" in "kube-system" namespace to be "Ready" ...
	I1026 00:46:23.352309   18362 pod_ready.go:39] duration metric: took 1m40.599451217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 00:46:23.352326   18362 api_server.go:52] waiting for apiserver process to appear ...
	I1026 00:46:23.352352   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 00:46:23.352399   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 00:46:23.405865   18362 cri.go:89] found id: "89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1"
	I1026 00:46:23.405887   18362 cri.go:89] found id: ""
	I1026 00:46:23.405896   18362 logs.go:282] 1 containers: [89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1]
	I1026 00:46:23.405946   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:23.410011   18362 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 00:46:23.410063   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 00:46:23.451730   18362 cri.go:89] found id: "39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e"
	I1026 00:46:23.451750   18362 cri.go:89] found id: ""
	I1026 00:46:23.451757   18362 logs.go:282] 1 containers: [39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e]
	I1026 00:46:23.451801   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:23.455402   18362 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 00:46:23.455464   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 00:46:23.495737   18362 cri.go:89] found id: "5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d"
	I1026 00:46:23.495768   18362 cri.go:89] found id: ""
	I1026 00:46:23.495778   18362 logs.go:282] 1 containers: [5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d]
	I1026 00:46:23.495836   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:23.502808   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 00:46:23.502875   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 00:46:23.537833   18362 cri.go:89] found id: "6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098"
	I1026 00:46:23.537864   18362 cri.go:89] found id: ""
	I1026 00:46:23.537873   18362 logs.go:282] 1 containers: [6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098]
	I1026 00:46:23.537931   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:23.541944   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 00:46:23.542029   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 00:46:23.579051   18362 cri.go:89] found id: "bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354"
	I1026 00:46:23.579072   18362 cri.go:89] found id: ""
	I1026 00:46:23.579080   18362 logs.go:282] 1 containers: [bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354]
	I1026 00:46:23.579124   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:23.582814   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 00:46:23.582889   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 00:46:23.617879   18362 cri.go:89] found id: "b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f"
	I1026 00:46:23.617906   18362 cri.go:89] found id: ""
	I1026 00:46:23.617914   18362 logs.go:282] 1 containers: [b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f]
	I1026 00:46:23.617958   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:23.622279   18362 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 00:46:23.622349   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 00:46:23.658673   18362 cri.go:89] found id: ""
	I1026 00:46:23.658705   18362 logs.go:282] 0 containers: []
	W1026 00:46:23.658716   18362 logs.go:284] No container was found matching "kindnet"
	I1026 00:46:23.658727   18362 logs.go:123] Gathering logs for CRI-O ...
	I1026 00:46:23.658741   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 00:46:24.761490   18362 logs.go:123] Gathering logs for describe nodes ...
	I1026 00:46:24.761541   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 00:46:24.886303   18362 logs.go:123] Gathering logs for kube-apiserver [89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1] ...
	I1026 00:46:24.886335   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1"
	I1026 00:46:24.934529   18362 logs.go:123] Gathering logs for etcd [39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e] ...
	I1026 00:46:24.934563   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e"
	I1026 00:46:25.000977   18362 logs.go:123] Gathering logs for coredns [5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d] ...
	I1026 00:46:25.001012   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d"
	I1026 00:46:25.038835   18362 logs.go:123] Gathering logs for kube-proxy [bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354] ...
	I1026 00:46:25.038864   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354"
	I1026 00:46:25.074707   18362 logs.go:123] Gathering logs for kube-controller-manager [b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f] ...
	I1026 00:46:25.074732   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f"
	I1026 00:46:25.133369   18362 logs.go:123] Gathering logs for kubelet ...
	I1026 00:46:25.133427   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 00:46:25.215955   18362 logs.go:123] Gathering logs for dmesg ...
	I1026 00:46:25.215988   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 00:46:25.230560   18362 logs.go:123] Gathering logs for kube-scheduler [6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098] ...
	I1026 00:46:25.230597   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098"
	I1026 00:46:25.271642   18362 logs.go:123] Gathering logs for container status ...
	I1026 00:46:25.271669   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 00:46:27.821293   18362 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 00:46:27.841076   18362 api_server.go:72] duration metric: took 1m50.316047956s to wait for apiserver process to appear ...
	I1026 00:46:27.841105   18362 api_server.go:88] waiting for apiserver healthz status ...
	I1026 00:46:27.841135   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 00:46:27.841177   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 00:46:27.879218   18362 cri.go:89] found id: "89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1"
	I1026 00:46:27.879251   18362 cri.go:89] found id: ""
	I1026 00:46:27.879261   18362 logs.go:282] 1 containers: [89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1]
	I1026 00:46:27.879319   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:27.884135   18362 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 00:46:27.884197   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 00:46:27.919716   18362 cri.go:89] found id: "39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e"
	I1026 00:46:27.919740   18362 cri.go:89] found id: ""
	I1026 00:46:27.919747   18362 logs.go:282] 1 containers: [39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e]
	I1026 00:46:27.919792   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:27.923742   18362 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 00:46:27.923805   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 00:46:27.963665   18362 cri.go:89] found id: "5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d"
	I1026 00:46:27.963690   18362 cri.go:89] found id: ""
	I1026 00:46:27.963699   18362 logs.go:282] 1 containers: [5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d]
	I1026 00:46:27.963751   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:27.967426   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 00:46:27.967480   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 00:46:28.004026   18362 cri.go:89] found id: "6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098"
	I1026 00:46:28.004055   18362 cri.go:89] found id: ""
	I1026 00:46:28.004064   18362 logs.go:282] 1 containers: [6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098]
	I1026 00:46:28.004111   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:28.011483   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 00:46:28.011563   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 00:46:28.054008   18362 cri.go:89] found id: "bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354"
	I1026 00:46:28.054027   18362 cri.go:89] found id: ""
	I1026 00:46:28.054036   18362 logs.go:282] 1 containers: [bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354]
	I1026 00:46:28.054089   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:28.058073   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 00:46:28.058117   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 00:46:28.094426   18362 cri.go:89] found id: "b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f"
	I1026 00:46:28.094448   18362 cri.go:89] found id: ""
	I1026 00:46:28.094459   18362 logs.go:282] 1 containers: [b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f]
	I1026 00:46:28.094503   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:28.098143   18362 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 00:46:28.098201   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 00:46:28.142839   18362 cri.go:89] found id: ""
	I1026 00:46:28.142858   18362 logs.go:282] 0 containers: []
	W1026 00:46:28.142865   18362 logs.go:284] No container was found matching "kindnet"
	I1026 00:46:28.142872   18362 logs.go:123] Gathering logs for coredns [5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d] ...
	I1026 00:46:28.142883   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d"
	I1026 00:46:28.178602   18362 logs.go:123] Gathering logs for kube-scheduler [6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098] ...
	I1026 00:46:28.178637   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098"
	I1026 00:46:28.225911   18362 logs.go:123] Gathering logs for dmesg ...
	I1026 00:46:28.225944   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 00:46:28.239815   18362 logs.go:123] Gathering logs for describe nodes ...
	I1026 00:46:28.239842   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 00:46:28.345958   18362 logs.go:123] Gathering logs for kube-apiserver [89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1] ...
	I1026 00:46:28.345987   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1"
	I1026 00:46:28.394436   18362 logs.go:123] Gathering logs for kube-controller-manager [b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f] ...
	I1026 00:46:28.394478   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f"
	I1026 00:46:28.451960   18362 logs.go:123] Gathering logs for CRI-O ...
	I1026 00:46:28.451993   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 00:46:29.467534   18362 logs.go:123] Gathering logs for container status ...
	I1026 00:46:29.467575   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 00:46:29.534713   18362 logs.go:123] Gathering logs for kubelet ...
	I1026 00:46:29.534750   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 00:46:29.621723   18362 logs.go:123] Gathering logs for etcd [39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e] ...
	I1026 00:46:29.621765   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e"
	I1026 00:46:29.687733   18362 logs.go:123] Gathering logs for kube-proxy [bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354] ...
	I1026 00:46:29.687764   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354"
	I1026 00:46:32.227685   18362 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I1026 00:46:32.231993   18362 api_server.go:279] https://192.168.39.207:8443/healthz returned 200:
	ok
	I1026 00:46:32.233049   18362 api_server.go:141] control plane version: v1.31.2
	I1026 00:46:32.233072   18362 api_server.go:131] duration metric: took 4.391960342s to wait for apiserver health ...
	I1026 00:46:32.233079   18362 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 00:46:32.233095   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 00:46:32.233135   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 00:46:32.282289   18362 cri.go:89] found id: "89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1"
	I1026 00:46:32.282312   18362 cri.go:89] found id: ""
	I1026 00:46:32.282319   18362 logs.go:282] 1 containers: [89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1]
	I1026 00:46:32.282362   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:32.296702   18362 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 00:46:32.296786   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 00:46:32.363658   18362 cri.go:89] found id: "39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e"
	I1026 00:46:32.363686   18362 cri.go:89] found id: ""
	I1026 00:46:32.363693   18362 logs.go:282] 1 containers: [39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e]
	I1026 00:46:32.363739   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:32.368536   18362 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 00:46:32.368608   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 00:46:32.417056   18362 cri.go:89] found id: "5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d"
	I1026 00:46:32.417080   18362 cri.go:89] found id: ""
	I1026 00:46:32.417087   18362 logs.go:282] 1 containers: [5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d]
	I1026 00:46:32.417134   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:32.420943   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 00:46:32.421003   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 00:46:32.463944   18362 cri.go:89] found id: "6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098"
	I1026 00:46:32.463970   18362 cri.go:89] found id: ""
	I1026 00:46:32.463978   18362 logs.go:282] 1 containers: [6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098]
	I1026 00:46:32.464022   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:32.468021   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 00:46:32.468085   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 00:46:32.522711   18362 cri.go:89] found id: "bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354"
	I1026 00:46:32.522736   18362 cri.go:89] found id: ""
	I1026 00:46:32.522746   18362 logs.go:282] 1 containers: [bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354]
	I1026 00:46:32.522803   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:32.526962   18362 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 00:46:32.527038   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 00:46:32.563485   18362 cri.go:89] found id: "b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f"
	I1026 00:46:32.563509   18362 cri.go:89] found id: ""
	I1026 00:46:32.563518   18362 logs.go:282] 1 containers: [b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f]
	I1026 00:46:32.563563   18362 ssh_runner.go:195] Run: which crictl
	I1026 00:46:32.567368   18362 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 00:46:32.567424   18362 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 00:46:32.610034   18362 cri.go:89] found id: ""
	I1026 00:46:32.610059   18362 logs.go:282] 0 containers: []
	W1026 00:46:32.610067   18362 logs.go:284] No container was found matching "kindnet"
	I1026 00:46:32.610075   18362 logs.go:123] Gathering logs for kube-apiserver [89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1] ...
	I1026 00:46:32.610085   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1"
	I1026 00:46:32.664057   18362 logs.go:123] Gathering logs for etcd [39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e] ...
	I1026 00:46:32.664085   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e"
	I1026 00:46:32.740212   18362 logs.go:123] Gathering logs for kube-scheduler [6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098] ...
	I1026 00:46:32.740246   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098"
	I1026 00:46:32.788784   18362 logs.go:123] Gathering logs for container status ...
	I1026 00:46:32.788816   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 00:46:32.841082   18362 logs.go:123] Gathering logs for kube-controller-manager [b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f] ...
	I1026 00:46:32.841112   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f"
	I1026 00:46:32.899056   18362 logs.go:123] Gathering logs for CRI-O ...
	I1026 00:46:32.899092   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 00:46:33.764981   18362 logs.go:123] Gathering logs for kubelet ...
	I1026 00:46:33.765029   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 00:46:33.850852   18362 logs.go:123] Gathering logs for dmesg ...
	I1026 00:46:33.850893   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 00:46:33.865960   18362 logs.go:123] Gathering logs for describe nodes ...
	I1026 00:46:33.865989   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 00:46:34.001743   18362 logs.go:123] Gathering logs for coredns [5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d] ...
	I1026 00:46:34.001771   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d"
	I1026 00:46:34.062545   18362 logs.go:123] Gathering logs for kube-proxy [bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354] ...
	I1026 00:46:34.062582   18362 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354"
	I1026 00:46:36.605290   18362 system_pods.go:59] 18 kube-system pods found
	I1026 00:46:36.605322   18362 system_pods.go:61] "amd-gpu-device-plugin-j7hfs" [998a3db9-77d1-44e5-8056-30bfb299237f] Running
	I1026 00:46:36.605328   18362 system_pods.go:61] "coredns-7c65d6cfc9-rg759" [0fc72168-a4b5-4ffb-a60a-879932edb065] Running
	I1026 00:46:36.605332   18362 system_pods.go:61] "csi-hostpath-attacher-0" [1b8843c4-1c3a-4b46-a2c7-e623be1a6fd0] Running
	I1026 00:46:36.605335   18362 system_pods.go:61] "csi-hostpath-resizer-0" [e305542d-5cae-4b7b-b8eb-8746838c449a] Running
	I1026 00:46:36.605338   18362 system_pods.go:61] "csi-hostpathplugin-klclf" [7c681fc4-5331-4a8c-8836-434972b7501f] Running
	I1026 00:46:36.605341   18362 system_pods.go:61] "etcd-addons-602145" [f01141d1-f024-4f45-b88e-316ef438b6db] Running
	I1026 00:46:36.605344   18362 system_pods.go:61] "kube-apiserver-addons-602145" [1a03095d-dcd7-46b6-bd82-2d57dccd04f4] Running
	I1026 00:46:36.605347   18362 system_pods.go:61] "kube-controller-manager-addons-602145" [3da3edd9-5929-4557-98b6-a308808e4f0e] Running
	I1026 00:46:36.605350   18362 system_pods.go:61] "kube-ingress-dns-minikube" [025a59e5-d16f-4e88-b27a-df9b744f402c] Running
	I1026 00:46:36.605354   18362 system_pods.go:61] "kube-proxy-zmp9p" [a8ec7e5b-66ba-4d78-9fb6-7391387d3926] Running
	I1026 00:46:36.605357   18362 system_pods.go:61] "kube-scheduler-addons-602145" [b97d691f-c7d5-46af-9e01-cce925d7b07a] Running
	I1026 00:46:36.605360   18362 system_pods.go:61] "metrics-server-84c5f94fbc-h4pf5" [d14866cc-8862-49b0-991e-5bebca6ba0c0] Running
	I1026 00:46:36.605363   18362 system_pods.go:61] "nvidia-device-plugin-daemonset-njbmm" [d10ea740-696c-405e-abda-87f78aad39bb] Running
	I1026 00:46:36.605366   18362 system_pods.go:61] "registry-66c9cd494c-pgk2s" [7960692c-0aab-43a0-89c7-aca8e7b3647f] Running
	I1026 00:46:36.605368   18362 system_pods.go:61] "registry-proxy-l5dxz" [d343ebc6-cfcc-44d1-974f-3bb153afc92e] Running
	I1026 00:46:36.605371   18362 system_pods.go:61] "snapshot-controller-56fcc65765-jg7jh" [88ad95c2-df86-4bf5-b748-a0356c7d9668] Running
	I1026 00:46:36.605375   18362 system_pods.go:61] "snapshot-controller-56fcc65765-m4s9s" [29e55a42-07fd-48a7-bef4-fbe602d75ff1] Running
	I1026 00:46:36.605378   18362 system_pods.go:61] "storage-provisioner" [7d49ab38-56fb-43aa-a6b9-153edaf888b2] Running
	I1026 00:46:36.605386   18362 system_pods.go:74] duration metric: took 4.372301823s to wait for pod list to return data ...
	I1026 00:46:36.605395   18362 default_sa.go:34] waiting for default service account to be created ...
	I1026 00:46:36.607661   18362 default_sa.go:45] found service account: "default"
	I1026 00:46:36.607681   18362 default_sa.go:55] duration metric: took 2.281632ms for default service account to be created ...
	I1026 00:46:36.607688   18362 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 00:46:36.614138   18362 system_pods.go:86] 18 kube-system pods found
	I1026 00:46:36.614162   18362 system_pods.go:89] "amd-gpu-device-plugin-j7hfs" [998a3db9-77d1-44e5-8056-30bfb299237f] Running
	I1026 00:46:36.614168   18362 system_pods.go:89] "coredns-7c65d6cfc9-rg759" [0fc72168-a4b5-4ffb-a60a-879932edb065] Running
	I1026 00:46:36.614173   18362 system_pods.go:89] "csi-hostpath-attacher-0" [1b8843c4-1c3a-4b46-a2c7-e623be1a6fd0] Running
	I1026 00:46:36.614176   18362 system_pods.go:89] "csi-hostpath-resizer-0" [e305542d-5cae-4b7b-b8eb-8746838c449a] Running
	I1026 00:46:36.614180   18362 system_pods.go:89] "csi-hostpathplugin-klclf" [7c681fc4-5331-4a8c-8836-434972b7501f] Running
	I1026 00:46:36.614185   18362 system_pods.go:89] "etcd-addons-602145" [f01141d1-f024-4f45-b88e-316ef438b6db] Running
	I1026 00:46:36.614188   18362 system_pods.go:89] "kube-apiserver-addons-602145" [1a03095d-dcd7-46b6-bd82-2d57dccd04f4] Running
	I1026 00:46:36.614194   18362 system_pods.go:89] "kube-controller-manager-addons-602145" [3da3edd9-5929-4557-98b6-a308808e4f0e] Running
	I1026 00:46:36.614201   18362 system_pods.go:89] "kube-ingress-dns-minikube" [025a59e5-d16f-4e88-b27a-df9b744f402c] Running
	I1026 00:46:36.614205   18362 system_pods.go:89] "kube-proxy-zmp9p" [a8ec7e5b-66ba-4d78-9fb6-7391387d3926] Running
	I1026 00:46:36.614211   18362 system_pods.go:89] "kube-scheduler-addons-602145" [b97d691f-c7d5-46af-9e01-cce925d7b07a] Running
	I1026 00:46:36.614214   18362 system_pods.go:89] "metrics-server-84c5f94fbc-h4pf5" [d14866cc-8862-49b0-991e-5bebca6ba0c0] Running
	I1026 00:46:36.614220   18362 system_pods.go:89] "nvidia-device-plugin-daemonset-njbmm" [d10ea740-696c-405e-abda-87f78aad39bb] Running
	I1026 00:46:36.614224   18362 system_pods.go:89] "registry-66c9cd494c-pgk2s" [7960692c-0aab-43a0-89c7-aca8e7b3647f] Running
	I1026 00:46:36.614229   18362 system_pods.go:89] "registry-proxy-l5dxz" [d343ebc6-cfcc-44d1-974f-3bb153afc92e] Running
	I1026 00:46:36.614232   18362 system_pods.go:89] "snapshot-controller-56fcc65765-jg7jh" [88ad95c2-df86-4bf5-b748-a0356c7d9668] Running
	I1026 00:46:36.614236   18362 system_pods.go:89] "snapshot-controller-56fcc65765-m4s9s" [29e55a42-07fd-48a7-bef4-fbe602d75ff1] Running
	I1026 00:46:36.614239   18362 system_pods.go:89] "storage-provisioner" [7d49ab38-56fb-43aa-a6b9-153edaf888b2] Running
	I1026 00:46:36.614247   18362 system_pods.go:126] duration metric: took 6.546085ms to wait for k8s-apps to be running ...
	I1026 00:46:36.614254   18362 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 00:46:36.614296   18362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 00:46:36.629828   18362 system_svc.go:56] duration metric: took 15.565045ms WaitForService to wait for kubelet
	I1026 00:46:36.629857   18362 kubeadm.go:582] duration metric: took 1m59.104837393s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 00:46:36.629880   18362 node_conditions.go:102] verifying NodePressure condition ...
	I1026 00:46:36.633214   18362 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 00:46:36.633238   18362 node_conditions.go:123] node cpu capacity is 2
	I1026 00:46:36.633250   18362 node_conditions.go:105] duration metric: took 3.365385ms to run NodePressure ...
	I1026 00:46:36.633258   18362 start.go:241] waiting for startup goroutines ...
	I1026 00:46:36.633265   18362 start.go:246] waiting for cluster config update ...
	I1026 00:46:36.633280   18362 start.go:255] writing updated cluster config ...
	I1026 00:46:36.633555   18362 ssh_runner.go:195] Run: rm -f paused
	I1026 00:46:36.681760   18362 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1026 00:46:36.683796   18362 out.go:177] * Done! kubectl is now configured to use "addons-602145" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.067911001Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903962067880396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5dfc84eb-929e-42e3-8099-7214b75f604d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.068716548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9437bdd9-6cc7-43df-9158-b8e79fdbfc0b name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.068787664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9437bdd9-6cc7-43df-9158-b8e79fdbfc0b name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.069063909Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ccaeb36695ec3f074f80065445013dacf22e6b007209b7f8d81e0af78140858,PodSandboxId:1aa65f557767f09a81a50a4327a66d2b792e06ca124d2566902aa1ae71b672ab,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1729903779958743723,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-kslk2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d68d2841-2c34-4251-9041-77f91bc8ae5a,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dbf9cb98ca3ea9f6c504e70dd4022bc4bee4741abf5fd90fbb78325cbf34b5b,PodSandboxId:eeec9e8541b63b4d23e6ac3314f2d8cc441d0d470527ccd5c1f577cac4a8a308,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1729903638287727230,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e5facde9-7465-4490-b87c-c7f93997b01b,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37fe19245e37ffd0a8139c0ea66e38950788c6b0316d376cf29ea59c859d42bd,PodSandboxId:5339180d9cbb6e020fde7605c5c0a3e81f4542f7837b8d86d05017302ed58e1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729903600716126745,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0906784a-c8dd-47c4-a
4ba-aab93d9d7b86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ea5d941e5422da3f30280e3e8d3a1ea37c2c46b2eb2df4bcc43f94b7cfc29f,PodSandboxId:3214a327c5408dfeeb1b54d623f1321496ff11d27631ba94cd1d0849e8fb798e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1729903521813970327,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h4pf5,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: d14866cc-8862-49b0-991e-5bebca6ba0c0,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88738203db74769180ea388511cc83ea799ab512c65750a04d164ec42a394738,PodSandboxId:8fcf2dfac27d8063a5eef0219659c5f86269ffe47e28f2a8d714f14e76b883b9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1729903495801367284,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-j7hfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 998a3db9-77d1-44e5-8056-30bfb299237f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a985b19d6ed2ebeca4d33799da388cff6c896a67b1792cfb837d44bd1cbdd34e,PodSandboxId:038c192f80c6a1a26e113d6896fc62d12aa3398726e1071e73135f4aa9471227,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729903484029444937,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d49ab38-56fb-43aa-a6b9-153edaf888b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d,PodSandboxId:2c7291542e5763588d0838ddee45efa5847eff50b53890dc2bc0a39182d11afd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729903480568723106,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rg759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc72168-a4b5-4ffb-a60a-879932edb065,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354,PodSandboxId:346a91f8335e04a118f37fcd80f48f0e43166fa71d24c391099a347f711565ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729903478437440682,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zmp9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ec7e5b-66ba-4d78-9fb6-7391387d3926,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098,PodSandboxId:d6cf525d5366585c1035033b5be477ed5a1574c54d7787c040bfb2fb9824d25d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1
a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729903466148378590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2421bc00409115f53b62f720e9994707,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e,PodSandboxId:49edc5bc1a50f91ef0fcf42c36725f9e8a7c8400aba0d0e291305bee5eab9f89,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048c
c4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729903466170768586,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5709ea146931fa039496c86db864a8e0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1,PodSandboxId:132167104a88683b472e1ce3d2e1b7ca082b9a16a683884768592e4ef267cf0e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:C
ONTAINER_RUNNING,CreatedAt:1729903466114586012,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3451cd31f76f1d65566f2bc7d1ef70fa,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f,PodSandboxId:38ee77ed691d7f843a114ec6230aa3d8ed0eb6238714187dc0c911a51e43f2b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER
_RUNNING,CreatedAt:1729903466104910269,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb40526d0e1222059735de592c242b33,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9437bdd9-6cc7-43df-9158-b8e79fdbfc0b name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.104701620Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0fa30529-e143-47ec-b4ef-be4076737065 name=/runtime.v1.RuntimeService/Version
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.104787973Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0fa30529-e143-47ec-b4ef-be4076737065 name=/runtime.v1.RuntimeService/Version
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.106306865Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=920dddee-0db9-4d50-a9b5-a153fe1be641 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.107736888Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903962107707983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=920dddee-0db9-4d50-a9b5-a153fe1be641 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.108293266Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=444eca37-9535-4cea-aa3d-d15b1a37ebc5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.108411121Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=444eca37-9535-4cea-aa3d-d15b1a37ebc5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.108727161Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ccaeb36695ec3f074f80065445013dacf22e6b007209b7f8d81e0af78140858,PodSandboxId:1aa65f557767f09a81a50a4327a66d2b792e06ca124d2566902aa1ae71b672ab,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1729903779958743723,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-kslk2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d68d2841-2c34-4251-9041-77f91bc8ae5a,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dbf9cb98ca3ea9f6c504e70dd4022bc4bee4741abf5fd90fbb78325cbf34b5b,PodSandboxId:eeec9e8541b63b4d23e6ac3314f2d8cc441d0d470527ccd5c1f577cac4a8a308,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1729903638287727230,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e5facde9-7465-4490-b87c-c7f93997b01b,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37fe19245e37ffd0a8139c0ea66e38950788c6b0316d376cf29ea59c859d42bd,PodSandboxId:5339180d9cbb6e020fde7605c5c0a3e81f4542f7837b8d86d05017302ed58e1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729903600716126745,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0906784a-c8dd-47c4-a
4ba-aab93d9d7b86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ea5d941e5422da3f30280e3e8d3a1ea37c2c46b2eb2df4bcc43f94b7cfc29f,PodSandboxId:3214a327c5408dfeeb1b54d623f1321496ff11d27631ba94cd1d0849e8fb798e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1729903521813970327,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h4pf5,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: d14866cc-8862-49b0-991e-5bebca6ba0c0,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88738203db74769180ea388511cc83ea799ab512c65750a04d164ec42a394738,PodSandboxId:8fcf2dfac27d8063a5eef0219659c5f86269ffe47e28f2a8d714f14e76b883b9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1729903495801367284,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-j7hfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 998a3db9-77d1-44e5-8056-30bfb299237f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a985b19d6ed2ebeca4d33799da388cff6c896a67b1792cfb837d44bd1cbdd34e,PodSandboxId:038c192f80c6a1a26e113d6896fc62d12aa3398726e1071e73135f4aa9471227,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729903484029444937,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d49ab38-56fb-43aa-a6b9-153edaf888b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d,PodSandboxId:2c7291542e5763588d0838ddee45efa5847eff50b53890dc2bc0a39182d11afd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729903480568723106,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rg759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc72168-a4b5-4ffb-a60a-879932edb065,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354,PodSandboxId:346a91f8335e04a118f37fcd80f48f0e43166fa71d24c391099a347f711565ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729903478437440682,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zmp9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ec7e5b-66ba-4d78-9fb6-7391387d3926,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098,PodSandboxId:d6cf525d5366585c1035033b5be477ed5a1574c54d7787c040bfb2fb9824d25d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1
a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729903466148378590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2421bc00409115f53b62f720e9994707,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e,PodSandboxId:49edc5bc1a50f91ef0fcf42c36725f9e8a7c8400aba0d0e291305bee5eab9f89,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048c
c4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729903466170768586,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5709ea146931fa039496c86db864a8e0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1,PodSandboxId:132167104a88683b472e1ce3d2e1b7ca082b9a16a683884768592e4ef267cf0e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:C
ONTAINER_RUNNING,CreatedAt:1729903466114586012,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3451cd31f76f1d65566f2bc7d1ef70fa,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f,PodSandboxId:38ee77ed691d7f843a114ec6230aa3d8ed0eb6238714187dc0c911a51e43f2b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER
_RUNNING,CreatedAt:1729903466104910269,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb40526d0e1222059735de592c242b33,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=444eca37-9535-4cea-aa3d-d15b1a37ebc5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.145707066Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e047c24-6882-4b4d-92b7-0935e8a7aca1 name=/runtime.v1.RuntimeService/Version
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.145779579Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e047c24-6882-4b4d-92b7-0935e8a7aca1 name=/runtime.v1.RuntimeService/Version
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.147078439Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a53b9cc8-3571-4cb2-a34c-c949cafbe857 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.148729552Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903962148700309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a53b9cc8-3571-4cb2-a34c-c949cafbe857 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.149360549Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28bdd608-20dc-490a-87f5-befefc70b46d name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.149420362Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28bdd608-20dc-490a-87f5-befefc70b46d name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.149681574Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ccaeb36695ec3f074f80065445013dacf22e6b007209b7f8d81e0af78140858,PodSandboxId:1aa65f557767f09a81a50a4327a66d2b792e06ca124d2566902aa1ae71b672ab,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1729903779958743723,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-kslk2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d68d2841-2c34-4251-9041-77f91bc8ae5a,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dbf9cb98ca3ea9f6c504e70dd4022bc4bee4741abf5fd90fbb78325cbf34b5b,PodSandboxId:eeec9e8541b63b4d23e6ac3314f2d8cc441d0d470527ccd5c1f577cac4a8a308,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1729903638287727230,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e5facde9-7465-4490-b87c-c7f93997b01b,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37fe19245e37ffd0a8139c0ea66e38950788c6b0316d376cf29ea59c859d42bd,PodSandboxId:5339180d9cbb6e020fde7605c5c0a3e81f4542f7837b8d86d05017302ed58e1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729903600716126745,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0906784a-c8dd-47c4-a
4ba-aab93d9d7b86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ea5d941e5422da3f30280e3e8d3a1ea37c2c46b2eb2df4bcc43f94b7cfc29f,PodSandboxId:3214a327c5408dfeeb1b54d623f1321496ff11d27631ba94cd1d0849e8fb798e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1729903521813970327,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h4pf5,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: d14866cc-8862-49b0-991e-5bebca6ba0c0,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88738203db74769180ea388511cc83ea799ab512c65750a04d164ec42a394738,PodSandboxId:8fcf2dfac27d8063a5eef0219659c5f86269ffe47e28f2a8d714f14e76b883b9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1729903495801367284,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-j7hfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 998a3db9-77d1-44e5-8056-30bfb299237f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a985b19d6ed2ebeca4d33799da388cff6c896a67b1792cfb837d44bd1cbdd34e,PodSandboxId:038c192f80c6a1a26e113d6896fc62d12aa3398726e1071e73135f4aa9471227,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729903484029444937,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d49ab38-56fb-43aa-a6b9-153edaf888b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d,PodSandboxId:2c7291542e5763588d0838ddee45efa5847eff50b53890dc2bc0a39182d11afd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729903480568723106,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rg759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc72168-a4b5-4ffb-a60a-879932edb065,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354,PodSandboxId:346a91f8335e04a118f37fcd80f48f0e43166fa71d24c391099a347f711565ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729903478437440682,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zmp9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ec7e5b-66ba-4d78-9fb6-7391387d3926,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098,PodSandboxId:d6cf525d5366585c1035033b5be477ed5a1574c54d7787c040bfb2fb9824d25d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1
a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729903466148378590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2421bc00409115f53b62f720e9994707,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e,PodSandboxId:49edc5bc1a50f91ef0fcf42c36725f9e8a7c8400aba0d0e291305bee5eab9f89,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048c
c4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729903466170768586,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5709ea146931fa039496c86db864a8e0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1,PodSandboxId:132167104a88683b472e1ce3d2e1b7ca082b9a16a683884768592e4ef267cf0e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:C
ONTAINER_RUNNING,CreatedAt:1729903466114586012,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3451cd31f76f1d65566f2bc7d1ef70fa,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f,PodSandboxId:38ee77ed691d7f843a114ec6230aa3d8ed0eb6238714187dc0c911a51e43f2b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER
_RUNNING,CreatedAt:1729903466104910269,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb40526d0e1222059735de592c242b33,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28bdd608-20dc-490a-87f5-befefc70b46d name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.181299262Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6aaedbcb-cac6-4308-97a1-a7fa54d0e8e5 name=/runtime.v1.RuntimeService/Version
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.181387236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6aaedbcb-cac6-4308-97a1-a7fa54d0e8e5 name=/runtime.v1.RuntimeService/Version
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.182271322Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e80fed1-0dc3-4ac2-8fa8-928a3dd2a112 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.183487806Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903962183464132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e80fed1-0dc3-4ac2-8fa8-928a3dd2a112 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.184219806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be1f309c-2d54-40f0-98f4-321c8293ed24 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.184275865Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be1f309c-2d54-40f0-98f4-321c8293ed24 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 00:52:42 addons-602145 crio[665]: time="2024-10-26 00:52:42.184523073Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ccaeb36695ec3f074f80065445013dacf22e6b007209b7f8d81e0af78140858,PodSandboxId:1aa65f557767f09a81a50a4327a66d2b792e06ca124d2566902aa1ae71b672ab,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1729903779958743723,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-kslk2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d68d2841-2c34-4251-9041-77f91bc8ae5a,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dbf9cb98ca3ea9f6c504e70dd4022bc4bee4741abf5fd90fbb78325cbf34b5b,PodSandboxId:eeec9e8541b63b4d23e6ac3314f2d8cc441d0d470527ccd5c1f577cac4a8a308,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1729903638287727230,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e5facde9-7465-4490-b87c-c7f93997b01b,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37fe19245e37ffd0a8139c0ea66e38950788c6b0316d376cf29ea59c859d42bd,PodSandboxId:5339180d9cbb6e020fde7605c5c0a3e81f4542f7837b8d86d05017302ed58e1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729903600716126745,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0906784a-c8dd-47c4-a
4ba-aab93d9d7b86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ea5d941e5422da3f30280e3e8d3a1ea37c2c46b2eb2df4bcc43f94b7cfc29f,PodSandboxId:3214a327c5408dfeeb1b54d623f1321496ff11d27631ba94cd1d0849e8fb798e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1729903521813970327,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-h4pf5,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: d14866cc-8862-49b0-991e-5bebca6ba0c0,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88738203db74769180ea388511cc83ea799ab512c65750a04d164ec42a394738,PodSandboxId:8fcf2dfac27d8063a5eef0219659c5f86269ffe47e28f2a8d714f14e76b883b9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1729903495801367284,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-j7hfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 998a3db9-77d1-44e5-8056-30bfb299237f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a985b19d6ed2ebeca4d33799da388cff6c896a67b1792cfb837d44bd1cbdd34e,PodSandboxId:038c192f80c6a1a26e113d6896fc62d12aa3398726e1071e73135f4aa9471227,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729903484029444937,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d49ab38-56fb-43aa-a6b9-153edaf888b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d,PodSandboxId:2c7291542e5763588d0838ddee45efa5847eff50b53890dc2bc0a39182d11afd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729903480568723106,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rg759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc72168-a4b5-4ffb-a60a-879932edb065,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354,PodSandboxId:346a91f8335e04a118f37fcd80f48f0e43166fa71d24c391099a347f711565ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729903478437440682,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zmp9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ec7e5b-66ba-4d78-9fb6-7391387d3926,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098,PodSandboxId:d6cf525d5366585c1035033b5be477ed5a1574c54d7787c040bfb2fb9824d25d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1
a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729903466148378590,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2421bc00409115f53b62f720e9994707,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e,PodSandboxId:49edc5bc1a50f91ef0fcf42c36725f9e8a7c8400aba0d0e291305bee5eab9f89,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048c
c4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729903466170768586,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5709ea146931fa039496c86db864a8e0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1,PodSandboxId:132167104a88683b472e1ce3d2e1b7ca082b9a16a683884768592e4ef267cf0e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:C
ONTAINER_RUNNING,CreatedAt:1729903466114586012,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3451cd31f76f1d65566f2bc7d1ef70fa,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f,PodSandboxId:38ee77ed691d7f843a114ec6230aa3d8ed0eb6238714187dc0c911a51e43f2b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER
_RUNNING,CreatedAt:1729903466104910269,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-602145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb40526d0e1222059735de592c242b33,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be1f309c-2d54-40f0-98f4-321c8293ed24 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4ccaeb36695ec       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   1aa65f557767f       hello-world-app-55bf9c44b4-kslk2
	2dbf9cb98ca3e       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         5 minutes ago       Running             nginx                     0                   eeec9e8541b63       nginx
	37fe19245e37f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   5339180d9cbb6       busybox
	02ea5d941e542       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   7 minutes ago       Running             metrics-server            0                   3214a327c5408       metrics-server-84c5f94fbc-h4pf5
	88738203db747       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                7 minutes ago       Running             amd-gpu-device-plugin     0                   8fcf2dfac27d8       amd-gpu-device-plugin-j7hfs
	a985b19d6ed2e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   038c192f80c6a       storage-provisioner
	5ab5a29a69bd0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        8 minutes ago       Running             coredns                   0                   2c7291542e576       coredns-7c65d6cfc9-rg759
	bb77e77566e84       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        8 minutes ago       Running             kube-proxy                0                   346a91f8335e0       kube-proxy-zmp9p
	39fbd6c96fd56       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   49edc5bc1a50f       etcd-addons-602145
	6ae7464e87276       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        8 minutes ago       Running             kube-scheduler            0                   d6cf525d53665       kube-scheduler-addons-602145
	89dbbaf2f83cd       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        8 minutes ago       Running             kube-apiserver            0                   132167104a886       kube-apiserver-addons-602145
	b45da4da24d6a       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        8 minutes ago       Running             kube-controller-manager   0                   38ee77ed691d7       kube-controller-manager-addons-602145
	
	
	==> coredns [5ab5a29a69bd06ca56386e63993108dbd3cb4472b2c66740ec603632a23b0c2d] <==
	[INFO] 10.244.0.22:59765 - 52706 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000061097s
	[INFO] 10.244.0.22:55074 - 58703 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000079501s
	[INFO] 10.244.0.22:59765 - 58708 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000046839s
	[INFO] 10.244.0.22:55074 - 28696 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069247s
	[INFO] 10.244.0.22:59765 - 15677 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033389s
	[INFO] 10.244.0.22:55074 - 51112 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000074052s
	[INFO] 10.244.0.22:59765 - 39728 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033005s
	[INFO] 10.244.0.22:55074 - 54061 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000102223s
	[INFO] 10.244.0.22:59765 - 57654 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003787s
	[INFO] 10.244.0.22:59765 - 29358 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034616s
	[INFO] 10.244.0.22:59765 - 44115 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000040096s
	[INFO] 10.244.0.22:40842 - 19679 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000103801s
	[INFO] 10.244.0.22:49508 - 22490 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000063307s
	[INFO] 10.244.0.22:40842 - 30112 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000038941s
	[INFO] 10.244.0.22:40842 - 45165 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031485s
	[INFO] 10.244.0.22:40842 - 33643 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000027566s
	[INFO] 10.244.0.22:40842 - 58213 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003162s
	[INFO] 10.244.0.22:40842 - 45685 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052013s
	[INFO] 10.244.0.22:40842 - 8569 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000110437s
	[INFO] 10.244.0.22:49508 - 32575 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000746643s
	[INFO] 10.244.0.22:49508 - 43538 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000091678s
	[INFO] 10.244.0.22:49508 - 65526 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000071044s
	[INFO] 10.244.0.22:49508 - 31918 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000076185s
	[INFO] 10.244.0.22:49508 - 16265 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000078883s
	[INFO] 10.244.0.22:49508 - 42040 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049366s
	
	
	==> describe nodes <==
	Name:               addons-602145
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-602145
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=addons-602145
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_26T00_44_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-602145
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 00:44:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-602145
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 00:52:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 00:50:10 +0000   Sat, 26 Oct 2024 00:44:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 00:50:10 +0000   Sat, 26 Oct 2024 00:44:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 00:50:10 +0000   Sat, 26 Oct 2024 00:44:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 00:50:10 +0000   Sat, 26 Oct 2024 00:44:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.207
	  Hostname:    addons-602145
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fd70c4d5df949d7a6badbd5665220d2
	  System UUID:                8fd70c4d-5df9-49d7-a6ba-dbd5665220d2
	  Boot ID:                    9806ef21-44bc-4e2d-a83d-b2708cb9617e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  default                     hello-world-app-55bf9c44b4-kslk2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 amd-gpu-device-plugin-j7hfs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 coredns-7c65d6cfc9-rg759                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m5s
	  kube-system                 etcd-addons-602145                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m10s
	  kube-system                 kube-apiserver-addons-602145             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-controller-manager-addons-602145    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-proxy-zmp9p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-scheduler-addons-602145             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 metrics-server-84c5f94fbc-h4pf5          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         8m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m3s                   kube-proxy       
	  Normal  Starting                 8m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m17s (x8 over 8m17s)  kubelet          Node addons-602145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m17s (x8 over 8m17s)  kubelet          Node addons-602145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m17s (x7 over 8m17s)  kubelet          Node addons-602145 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m10s (x2 over 8m11s)  kubelet          Node addons-602145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m10s (x2 over 8m11s)  kubelet          Node addons-602145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m10s (x2 over 8m11s)  kubelet          Node addons-602145 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m9s                   kubelet          Node addons-602145 status is now: NodeReady
	  Normal  RegisteredNode           8m6s                   node-controller  Node addons-602145 event: Registered Node addons-602145 in Controller
	
	
	==> dmesg <==
	[  +0.155867] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.169040] kauditd_printk_skb: 137 callbacks suppressed
	[  +5.151073] kauditd_printk_skb: 129 callbacks suppressed
	[  +5.079085] kauditd_printk_skb: 72 callbacks suppressed
	[Oct26 00:45] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.391242] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.135044] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.740372] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.122217] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.924157] kauditd_printk_skb: 24 callbacks suppressed
	[  +6.255760] kauditd_printk_skb: 7 callbacks suppressed
	[Oct26 00:46] kauditd_printk_skb: 4 callbacks suppressed
	[ +49.236855] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.008183] kauditd_printk_skb: 2 callbacks suppressed
	[Oct26 00:47] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.677992] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.188821] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.281850] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.403719] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.085636] kauditd_printk_skb: 37 callbacks suppressed
	[ +21.586481] kauditd_printk_skb: 2 callbacks suppressed
	[Oct26 00:48] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.876611] kauditd_printk_skb: 7 callbacks suppressed
	[Oct26 00:49] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.316418] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [39fbd6c96fd5654d1274bdf0291d416cd0566fd98c3afeb30e5b93b80257402e] <==
	{"level":"info","ts":"2024-10-26T00:45:52.928886Z","caller":"traceutil/trace.go:171","msg":"trace[1775187850] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-h4pf5; range_end:; response_count:1; response_revision:1132; }","duration":"100.038339ms","start":"2024-10-26T00:45:52.828840Z","end":"2024-10-26T00:45:52.928879Z","steps":["trace[1775187850] 'agreement among raft nodes before linearized reading'  (duration: 99.981705ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T00:45:52.928959Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.13614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T00:45:52.928971Z","caller":"traceutil/trace.go:171","msg":"trace[1483416801] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1132; }","duration":"199.149694ms","start":"2024-10-26T00:45:52.729818Z","end":"2024-10-26T00:45:52.928967Z","steps":["trace[1483416801] 'agreement among raft nodes before linearized reading'  (duration: 199.129951ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T00:45:52.929058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.700659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T00:45:52.929077Z","caller":"traceutil/trace.go:171","msg":"trace[941261657] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1132; }","duration":"240.721132ms","start":"2024-10-26T00:45:52.688350Z","end":"2024-10-26T00:45:52.929071Z","steps":["trace[941261657] 'agreement among raft nodes before linearized reading'  (duration: 240.687986ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T00:46:01.202052Z","caller":"traceutil/trace.go:171","msg":"trace[424660994] linearizableReadLoop","detail":"{readStateIndex:1193; appliedIndex:1192; }","duration":"372.969749ms","start":"2024-10-26T00:46:00.829048Z","end":"2024-10-26T00:46:01.202018Z","steps":["trace[424660994] 'read index received'  (duration: 372.771857ms)","trace[424660994] 'applied index is now lower than readState.Index'  (duration: 197.233µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-26T00:46:01.202192Z","caller":"traceutil/trace.go:171","msg":"trace[1512070454] transaction","detail":"{read_only:false; response_revision:1158; number_of_response:1; }","duration":"434.780856ms","start":"2024-10-26T00:46:00.767362Z","end":"2024-10-26T00:46:01.202143Z","steps":["trace[1512070454] 'process raft request'  (duration: 434.506289ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T00:46:01.202290Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T00:46:00.767340Z","time spent":"434.870295ms","remote":"127.0.0.1:57562","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-qwh5wbjtdpl23x2sw7nz73nroq\" mod_revision:1124 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-qwh5wbjtdpl23x2sw7nz73nroq\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-qwh5wbjtdpl23x2sw7nz73nroq\" > >"}
	{"level":"warn","ts":"2024-10-26T00:46:01.202473Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"373.443796ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-h4pf5\" ","response":"range_response_count:1 size:4566"}
	{"level":"info","ts":"2024-10-26T00:46:01.202519Z","caller":"traceutil/trace.go:171","msg":"trace[2107659026] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-h4pf5; range_end:; response_count:1; response_revision:1158; }","duration":"373.488533ms","start":"2024-10-26T00:46:00.829018Z","end":"2024-10-26T00:46:01.202506Z","steps":["trace[2107659026] 'agreement among raft nodes before linearized reading'  (duration: 373.353549ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T00:46:01.202542Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T00:46:00.828976Z","time spent":"373.559545ms","remote":"127.0.0.1:57486","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4589,"request content":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-h4pf5\" "}
	{"level":"warn","ts":"2024-10-26T00:46:01.202709Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"343.367803ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T00:46:01.202750Z","caller":"traceutil/trace.go:171","msg":"trace[702797044] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1158; }","duration":"343.409789ms","start":"2024-10-26T00:46:00.859334Z","end":"2024-10-26T00:46:01.202744Z","steps":["trace[702797044] 'agreement among raft nodes before linearized reading'  (duration: 343.356732ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T00:46:01.202769Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T00:46:00.859290Z","time spent":"343.473729ms","remote":"127.0.0.1:57486","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-26T00:46:01.202849Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.140277ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-10-26T00:46:01.203561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.43912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-10-26T00:46:01.203660Z","caller":"traceutil/trace.go:171","msg":"trace[1272583878] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1158; }","duration":"229.541685ms","start":"2024-10-26T00:46:00.974109Z","end":"2024-10-26T00:46:01.203651Z","steps":["trace[1272583878] 'agreement among raft nodes before linearized reading'  (duration: 229.395064ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T00:46:01.202877Z","caller":"traceutil/trace.go:171","msg":"trace[1896670558] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1158; }","duration":"225.169446ms","start":"2024-10-26T00:46:00.977702Z","end":"2024-10-26T00:46:01.202871Z","steps":["trace[1896670558] 'agreement among raft nodes before linearized reading'  (duration: 225.131529ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T00:47:53.726718Z","caller":"traceutil/trace.go:171","msg":"trace[630696842] transaction","detail":"{read_only:false; response_revision:1696; number_of_response:1; }","duration":"543.229027ms","start":"2024-10-26T00:47:53.183440Z","end":"2024-10-26T00:47:53.726669Z","steps":["trace[630696842] 'process raft request'  (duration: 542.853235ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T00:47:53.726993Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T00:47:53.183426Z","time spent":"543.411058ms","remote":"127.0.0.1:57562","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1690 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-10-26T00:47:53.727337Z","caller":"traceutil/trace.go:171","msg":"trace[1508805861] linearizableReadLoop","detail":"{readStateIndex:1764; appliedIndex:1764; }","duration":"438.615279ms","start":"2024-10-26T00:47:53.288701Z","end":"2024-10-26T00:47:53.727316Z","steps":["trace[1508805861] 'read index received'  (duration: 438.612318ms)","trace[1508805861] 'applied index is now lower than readState.Index'  (duration: 2.494µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T00:47:53.727425Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"438.71083ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T00:47:53.727463Z","caller":"traceutil/trace.go:171","msg":"trace[1481220585] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1696; }","duration":"438.757352ms","start":"2024-10-26T00:47:53.288697Z","end":"2024-10-26T00:47:53.727455Z","steps":["trace[1481220585] 'agreement among raft nodes before linearized reading'  (duration: 438.679686ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T00:47:53.727497Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T00:47:53.288665Z","time spent":"438.825981ms","remote":"127.0.0.1:57486","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-10-26T00:47:53.733724Z","caller":"traceutil/trace.go:171","msg":"trace[1113148110] transaction","detail":"{read_only:false; response_revision:1697; number_of_response:1; }","duration":"263.243814ms","start":"2024-10-26T00:47:53.470469Z","end":"2024-10-26T00:47:53.733713Z","steps":["trace[1113148110] 'process raft request'  (duration: 263.178473ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:52:42 up 8 min,  0 users,  load average: 0.22, 0.50, 0.39
	Linux addons-602145 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [89dbbaf2f83cda0d190ea4c83f6cb412dbabc98f42d44ee172926701a6978bf1] <==
	E1026 00:46:23.181687       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.115.151:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.115.151:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.115.151:443: connect: connection refused" logger="UnhandledError"
	I1026 00:46:23.253271       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1026 00:46:48.392559       1 conn.go:339] Error on socket receive: read tcp 192.168.39.207:8443->192.168.39.1:57720: use of closed network connection
	E1026 00:46:48.567953       1 conn.go:339] Error on socket receive: read tcp 192.168.39.207:8443->192.168.39.1:57752: use of closed network connection
	I1026 00:46:57.572332       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.223.238"}
	I1026 00:47:03.552134       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1026 00:47:04.692366       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1026 00:47:15.695350       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1026 00:47:15.873063       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.125.48"}
	E1026 00:47:46.184324       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1026 00:48:01.354681       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1026 00:48:21.482370       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:48:21.485759       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 00:48:21.517099       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:48:21.519247       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 00:48:21.526923       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:48:21.534421       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 00:48:21.605103       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:48:21.605189       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 00:48:21.657221       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 00:48:21.657265       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1026 00:48:22.605332       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1026 00:48:22.659513       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1026 00:48:22.673063       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1026 00:49:37.301316       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.186.193"}
	
	
	==> kube-controller-manager [b45da4da24d6a154128b3fca10088e97cdd19dc172aadd8937085d5060a08d7f] <==
	E1026 00:50:13.488893       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:50:15.930215       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:50:15.930345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:50:26.127622       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:50:26.127795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:50:34.495989       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:50:34.496083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:50:50.369473       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:50:50.369595       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:51:06.223815       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:51:06.224002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:51:10.769330       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:51:10.769506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:51:16.437194       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:51:16.437301       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:51:32.904908       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:51:32.905055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:51:47.724675       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:51:47.724786       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:51:51.575929       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:51:51.575964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:52:06.658009       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:52:06.658055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1026 00:52:31.714082       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1026 00:52:31.714356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [bb77e77566e84f72b0be8af76434bba759fb58a33c2d234de0538c9e15420354] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1026 00:44:39.155819       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1026 00:44:39.171972       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.207"]
	E1026 00:44:39.172047       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 00:44:39.251183       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1026 00:44:39.251240       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 00:44:39.251274       1 server_linux.go:169] "Using iptables Proxier"
	I1026 00:44:39.256651       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 00:44:39.256918       1 server.go:483] "Version info" version="v1.31.2"
	I1026 00:44:39.256933       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 00:44:39.258554       1 config.go:199] "Starting service config controller"
	I1026 00:44:39.258565       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1026 00:44:39.258587       1 config.go:105] "Starting endpoint slice config controller"
	I1026 00:44:39.258591       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1026 00:44:39.258973       1 config.go:328] "Starting node config controller"
	I1026 00:44:39.258983       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1026 00:44:39.358662       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1026 00:44:39.358664       1 shared_informer.go:320] Caches are synced for service config
	I1026 00:44:39.359019       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6ae7464e87276b472126cb7a14a70be2775029724dec91419250c2a1c4b61098] <==
	W1026 00:44:29.847722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 00:44:29.847813       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:29.863558       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1026 00:44:29.863642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:29.913398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1026 00:44:29.913542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:29.989962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1026 00:44:29.990009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:30.081203       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1026 00:44:30.081247       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:30.083106       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 00:44:30.083242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:30.111289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1026 00:44:30.111396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:30.168045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1026 00:44:30.168182       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:30.182712       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1026 00:44:30.182756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:30.212887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1026 00:44:30.212985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:30.318962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1026 00:44:30.319068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1026 00:44:30.358977       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1026 00:44:30.359101       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1026 00:44:33.502316       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 26 00:51:32 addons-602145 kubelet[1194]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 00:51:32 addons-602145 kubelet[1194]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 00:51:32 addons-602145 kubelet[1194]: E1026 00:51:32.256076    1194 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903892255442053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:51:32 addons-602145 kubelet[1194]: E1026 00:51:32.256102    1194 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903892255442053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:51:42 addons-602145 kubelet[1194]: E1026 00:51:42.258890    1194 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903902258382353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:51:42 addons-602145 kubelet[1194]: E1026 00:51:42.259284    1194 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903902258382353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:51:52 addons-602145 kubelet[1194]: E1026 00:51:52.263986    1194 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903912263240738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:51:52 addons-602145 kubelet[1194]: E1026 00:51:52.264024    1194 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903912263240738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:52:02 addons-602145 kubelet[1194]: E1026 00:52:02.266487    1194 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903922266109531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:52:02 addons-602145 kubelet[1194]: E1026 00:52:02.266803    1194 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903922266109531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:52:08 addons-602145 kubelet[1194]: I1026 00:52:08.999565    1194 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 00:52:12 addons-602145 kubelet[1194]: E1026 00:52:12.269807    1194 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903932269331223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:52:12 addons-602145 kubelet[1194]: E1026 00:52:12.270183    1194 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903932269331223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:52:22 addons-602145 kubelet[1194]: E1026 00:52:22.274792    1194 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903942274219008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:52:22 addons-602145 kubelet[1194]: E1026 00:52:22.274832    1194 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903942274219008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:52:32 addons-602145 kubelet[1194]: E1026 00:52:32.027857    1194 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 26 00:52:32 addons-602145 kubelet[1194]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 26 00:52:32 addons-602145 kubelet[1194]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 26 00:52:32 addons-602145 kubelet[1194]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 00:52:32 addons-602145 kubelet[1194]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 00:52:32 addons-602145 kubelet[1194]: E1026 00:52:32.278042    1194 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903952277574939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:52:32 addons-602145 kubelet[1194]: E1026 00:52:32.278136    1194 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903952277574939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:52:32 addons-602145 kubelet[1194]: I1026 00:52:32.999127    1194 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-j7hfs" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 00:52:42 addons-602145 kubelet[1194]: E1026 00:52:42.281100    1194 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903962280645030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 00:52:42 addons-602145 kubelet[1194]: E1026 00:52:42.281173    1194 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729903962280645030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596173,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [a985b19d6ed2ebeca4d33799da388cff6c896a67b1792cfb837d44bd1cbdd34e] <==
	I1026 00:44:44.883768       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 00:44:45.125074       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 00:44:45.152428       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 00:44:45.196556       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 00:44:45.196784       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-602145_344be09d-51c7-4147-a809-375a65a491de!
	I1026 00:44:45.196835       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"31e0f04d-eb9c-4d94-9942-69ec8f9e4cfa", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-602145_344be09d-51c7-4147-a809-375a65a491de became leader
	I1026 00:44:45.298279       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-602145_344be09d-51c7-4147-a809-375a65a491de!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-602145 -n addons-602145
helpers_test.go:261: (dbg) Run:  kubectl --context addons-602145 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (347.01s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.34s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-602145
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-602145: exit status 82 (2m0.446046294s)

                                                
                                                
-- stdout --
	* Stopping node "addons-602145"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-602145" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-602145
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-602145: exit status 11 (21.603540082s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-602145" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-602145
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-602145: exit status 11 (6.144457569s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-602145" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-602145
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-602145: exit status 11 (6.143915793s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.207:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-602145" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.34s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 node stop m02 -v=7 --alsologtostderr
E1026 01:04:33.938604   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:05:14.900068   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-300623 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.480664391s)

                                                
                                                
-- stdout --
	* Stopping node "ha-300623-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 01:04:13.887323   31963 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:04:13.887467   31963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:04:13.887478   31963 out.go:358] Setting ErrFile to fd 2...
	I1026 01:04:13.887482   31963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:04:13.887685   31963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 01:04:13.887922   31963 mustload.go:65] Loading cluster: ha-300623
	I1026 01:04:13.888318   31963 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:04:13.888333   31963 stop.go:39] StopHost: ha-300623-m02
	I1026 01:04:13.888676   31963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:04:13.888728   31963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:04:13.905333   31963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40259
	I1026 01:04:13.905878   31963 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:04:13.906421   31963 main.go:141] libmachine: Using API Version  1
	I1026 01:04:13.906439   31963 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:04:13.906812   31963 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:04:13.909227   31963 out.go:177] * Stopping node "ha-300623-m02"  ...
	I1026 01:04:13.910439   31963 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1026 01:04:13.910487   31963 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:04:13.910745   31963 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1026 01:04:13.910784   31963 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:04:13.913623   31963 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:04:13.914114   31963 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:04:13.914147   31963 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:04:13.914323   31963 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:04:13.914519   31963 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:04:13.914672   31963 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:04:13.914824   31963 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
	I1026 01:04:14.005925   31963 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1026 01:04:14.058116   31963 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1026 01:04:14.114864   31963 main.go:141] libmachine: Stopping "ha-300623-m02"...
	I1026 01:04:14.114888   31963 main.go:141] libmachine: (ha-300623-m02) Calling .GetState
	I1026 01:04:14.116330   31963 main.go:141] libmachine: (ha-300623-m02) Calling .Stop
	I1026 01:04:14.120654   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 0/120
	I1026 01:04:15.122072   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 1/120
	I1026 01:04:16.123581   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 2/120
	I1026 01:04:17.124751   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 3/120
	I1026 01:04:18.126071   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 4/120
	I1026 01:04:19.127579   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 5/120
	I1026 01:04:20.128939   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 6/120
	I1026 01:04:21.130188   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 7/120
	I1026 01:04:22.131680   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 8/120
	I1026 01:04:23.132864   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 9/120
	I1026 01:04:24.135169   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 10/120
	I1026 01:04:25.136441   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 11/120
	I1026 01:04:26.138255   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 12/120
	I1026 01:04:27.139526   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 13/120
	I1026 01:04:28.140748   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 14/120
	I1026 01:04:29.143102   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 15/120
	I1026 01:04:30.144386   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 16/120
	I1026 01:04:31.145839   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 17/120
	I1026 01:04:32.148108   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 18/120
	I1026 01:04:33.149565   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 19/120
	I1026 01:04:34.151679   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 20/120
	I1026 01:04:35.152975   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 21/120
	I1026 01:04:36.154394   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 22/120
	I1026 01:04:37.155823   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 23/120
	I1026 01:04:38.157106   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 24/120
	I1026 01:04:39.158750   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 25/120
	I1026 01:04:40.160716   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 26/120
	I1026 01:04:41.162205   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 27/120
	I1026 01:04:42.163951   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 28/120
	I1026 01:04:43.166083   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 29/120
	I1026 01:04:44.168395   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 30/120
	I1026 01:04:45.170013   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 31/120
	I1026 01:04:46.171174   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 32/120
	I1026 01:04:47.172465   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 33/120
	I1026 01:04:48.174564   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 34/120
	I1026 01:04:49.176736   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 35/120
	I1026 01:04:50.179011   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 36/120
	I1026 01:04:51.181219   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 37/120
	I1026 01:04:52.182985   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 38/120
	I1026 01:04:53.184349   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 39/120
	I1026 01:04:54.186067   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 40/120
	I1026 01:04:55.188100   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 41/120
	I1026 01:04:56.189582   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 42/120
	I1026 01:04:57.191998   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 43/120
	I1026 01:04:58.193332   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 44/120
	I1026 01:04:59.195286   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 45/120
	I1026 01:05:00.196865   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 46/120
	I1026 01:05:01.198127   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 47/120
	I1026 01:05:02.200338   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 48/120
	I1026 01:05:03.202128   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 49/120
	I1026 01:05:04.204383   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 50/120
	I1026 01:05:05.206030   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 51/120
	I1026 01:05:06.208305   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 52/120
	I1026 01:05:07.210425   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 53/120
	I1026 01:05:08.211993   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 54/120
	I1026 01:05:09.213779   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 55/120
	I1026 01:05:10.216152   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 56/120
	I1026 01:05:11.218023   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 57/120
	I1026 01:05:12.219944   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 58/120
	I1026 01:05:13.221242   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 59/120
	I1026 01:05:14.223732   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 60/120
	I1026 01:05:15.225255   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 61/120
	I1026 01:05:16.226566   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 62/120
	I1026 01:05:17.228028   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 63/120
	I1026 01:05:18.229340   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 64/120
	I1026 01:05:19.231371   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 65/120
	I1026 01:05:20.232790   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 66/120
	I1026 01:05:21.234054   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 67/120
	I1026 01:05:22.235569   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 68/120
	I1026 01:05:23.237455   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 69/120
	I1026 01:05:24.239573   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 70/120
	I1026 01:05:25.241791   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 71/120
	I1026 01:05:26.243154   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 72/120
	I1026 01:05:27.244903   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 73/120
	I1026 01:05:28.246588   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 74/120
	I1026 01:05:29.248708   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 75/120
	I1026 01:05:30.250498   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 76/120
	I1026 01:05:31.252209   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 77/120
	I1026 01:05:32.254173   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 78/120
	I1026 01:05:33.256095   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 79/120
	I1026 01:05:34.257884   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 80/120
	I1026 01:05:35.259797   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 81/120
	I1026 01:05:36.261322   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 82/120
	I1026 01:05:37.262764   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 83/120
	I1026 01:05:38.264360   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 84/120
	I1026 01:05:39.266220   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 85/120
	I1026 01:05:40.267819   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 86/120
	I1026 01:05:41.269323   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 87/120
	I1026 01:05:42.271055   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 88/120
	I1026 01:05:43.272784   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 89/120
	I1026 01:05:44.274630   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 90/120
	I1026 01:05:45.275838   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 91/120
	I1026 01:05:46.278084   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 92/120
	I1026 01:05:47.279227   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 93/120
	I1026 01:05:48.280601   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 94/120
	I1026 01:05:49.283110   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 95/120
	I1026 01:05:50.284473   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 96/120
	I1026 01:05:51.285608   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 97/120
	I1026 01:05:52.288156   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 98/120
	I1026 01:05:53.289474   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 99/120
	I1026 01:05:54.291494   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 100/120
	I1026 01:05:55.292819   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 101/120
	I1026 01:05:56.293968   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 102/120
	I1026 01:05:57.295748   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 103/120
	I1026 01:05:58.296999   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 104/120
	I1026 01:05:59.298363   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 105/120
	I1026 01:06:00.300278   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 106/120
	I1026 01:06:01.301663   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 107/120
	I1026 01:06:02.303997   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 108/120
	I1026 01:06:03.305199   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 109/120
	I1026 01:06:04.307026   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 110/120
	I1026 01:06:05.308293   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 111/120
	I1026 01:06:06.310053   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 112/120
	I1026 01:06:07.311229   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 113/120
	I1026 01:06:08.312643   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 114/120
	I1026 01:06:09.314755   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 115/120
	I1026 01:06:10.315911   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 116/120
	I1026 01:06:11.317663   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 117/120
	I1026 01:06:12.319068   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 118/120
	I1026 01:06:13.320502   31963 main.go:141] libmachine: (ha-300623-m02) Waiting for machine to stop 119/120
	I1026 01:06:14.321247   31963 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1026 01:06:14.321446   31963 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-300623 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr: (18.684455392s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-300623 -n ha-300623
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-300623 logs -n 25: (1.332119183s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2355760230/001/cp-test_ha-300623-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623:/home/docker/cp-test_ha-300623-m03_ha-300623.txt                       |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623 sudo cat                                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m03_ha-300623.txt                                 |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m02:/home/docker/cp-test_ha-300623-m03_ha-300623-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m02 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m03_ha-300623-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04:/home/docker/cp-test_ha-300623-m03_ha-300623-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m04 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m03_ha-300623-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp testdata/cp-test.txt                                                | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2355760230/001/cp-test_ha-300623-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623:/home/docker/cp-test_ha-300623-m04_ha-300623.txt                       |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623 sudo cat                                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623.txt                                 |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m02:/home/docker/cp-test_ha-300623-m04_ha-300623-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m02 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03:/home/docker/cp-test_ha-300623-m04_ha-300623-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m03 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-300623 node stop m02 -v=7                                                     | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 00:59:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 00:59:41.102327   27934 out.go:345] Setting OutFile to fd 1 ...
	I1026 00:59:41.102422   27934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:59:41.102427   27934 out.go:358] Setting ErrFile to fd 2...
	I1026 00:59:41.102431   27934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:59:41.102629   27934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 00:59:41.103175   27934 out.go:352] Setting JSON to false
	I1026 00:59:41.103986   27934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2521,"bootTime":1729901860,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 00:59:41.104085   27934 start.go:139] virtualization: kvm guest
	I1026 00:59:41.106060   27934 out.go:177] * [ha-300623] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 00:59:41.107343   27934 notify.go:220] Checking for updates...
	I1026 00:59:41.107361   27934 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 00:59:41.108566   27934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:59:41.109853   27934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 00:59:41.111166   27934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:59:41.112531   27934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 00:59:41.113798   27934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 00:59:41.115167   27934 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 00:59:41.148833   27934 out.go:177] * Using the kvm2 driver based on user configuration
	I1026 00:59:41.150115   27934 start.go:297] selected driver: kvm2
	I1026 00:59:41.150128   27934 start.go:901] validating driver "kvm2" against <nil>
	I1026 00:59:41.150139   27934 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 00:59:41.150812   27934 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:59:41.150910   27934 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 00:59:41.165692   27934 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 00:59:41.165750   27934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 00:59:41.166043   27934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 00:59:41.166082   27934 cni.go:84] Creating CNI manager for ""
	I1026 00:59:41.166138   27934 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1026 00:59:41.166151   27934 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 00:59:41.166210   27934 start.go:340] cluster config:
	{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 00:59:41.166340   27934 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:59:41.168250   27934 out.go:177] * Starting "ha-300623" primary control-plane node in "ha-300623" cluster
	I1026 00:59:41.169625   27934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 00:59:41.169671   27934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 00:59:41.169699   27934 cache.go:56] Caching tarball of preloaded images
	I1026 00:59:41.169771   27934 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 00:59:41.169781   27934 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 00:59:41.170066   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 00:59:41.170083   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json: {Name:mkc18d341848fb714503df8b4bfc42be69331fb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:59:41.170205   27934 start.go:360] acquireMachinesLock for ha-300623: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 00:59:41.170231   27934 start.go:364] duration metric: took 14.614µs to acquireMachinesLock for "ha-300623"
	I1026 00:59:41.170247   27934 start.go:93] Provisioning new machine with config: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 00:59:41.170298   27934 start.go:125] createHost starting for "" (driver="kvm2")
	I1026 00:59:41.171896   27934 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1026 00:59:41.172034   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:59:41.172078   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:59:41.186522   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39131
	I1026 00:59:41.186988   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:59:41.187517   27934 main.go:141] libmachine: Using API Version  1
	I1026 00:59:41.187539   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:59:41.187925   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:59:41.188146   27934 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 00:59:41.188284   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 00:59:41.188436   27934 start.go:159] libmachine.API.Create for "ha-300623" (driver="kvm2")
	I1026 00:59:41.188472   27934 client.go:168] LocalClient.Create starting
	I1026 00:59:41.188506   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 00:59:41.188539   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 00:59:41.188554   27934 main.go:141] libmachine: Parsing certificate...
	I1026 00:59:41.188604   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 00:59:41.188622   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 00:59:41.188635   27934 main.go:141] libmachine: Parsing certificate...
	I1026 00:59:41.188652   27934 main.go:141] libmachine: Running pre-create checks...
	I1026 00:59:41.188664   27934 main.go:141] libmachine: (ha-300623) Calling .PreCreateCheck
	I1026 00:59:41.189023   27934 main.go:141] libmachine: (ha-300623) Calling .GetConfigRaw
	I1026 00:59:41.189374   27934 main.go:141] libmachine: Creating machine...
	I1026 00:59:41.189386   27934 main.go:141] libmachine: (ha-300623) Calling .Create
	I1026 00:59:41.189526   27934 main.go:141] libmachine: (ha-300623) Creating KVM machine...
	I1026 00:59:41.190651   27934 main.go:141] libmachine: (ha-300623) DBG | found existing default KVM network
	I1026 00:59:41.191301   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.191170   27957 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I1026 00:59:41.191329   27934 main.go:141] libmachine: (ha-300623) DBG | created network xml: 
	I1026 00:59:41.191339   27934 main.go:141] libmachine: (ha-300623) DBG | <network>
	I1026 00:59:41.191366   27934 main.go:141] libmachine: (ha-300623) DBG |   <name>mk-ha-300623</name>
	I1026 00:59:41.191399   27934 main.go:141] libmachine: (ha-300623) DBG |   <dns enable='no'/>
	I1026 00:59:41.191415   27934 main.go:141] libmachine: (ha-300623) DBG |   
	I1026 00:59:41.191424   27934 main.go:141] libmachine: (ha-300623) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1026 00:59:41.191431   27934 main.go:141] libmachine: (ha-300623) DBG |     <dhcp>
	I1026 00:59:41.191438   27934 main.go:141] libmachine: (ha-300623) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1026 00:59:41.191445   27934 main.go:141] libmachine: (ha-300623) DBG |     </dhcp>
	I1026 00:59:41.191450   27934 main.go:141] libmachine: (ha-300623) DBG |   </ip>
	I1026 00:59:41.191457   27934 main.go:141] libmachine: (ha-300623) DBG |   
	I1026 00:59:41.191462   27934 main.go:141] libmachine: (ha-300623) DBG | </network>
	I1026 00:59:41.191489   27934 main.go:141] libmachine: (ha-300623) DBG | 
	I1026 00:59:41.196331   27934 main.go:141] libmachine: (ha-300623) DBG | trying to create private KVM network mk-ha-300623 192.168.39.0/24...
	I1026 00:59:41.258139   27934 main.go:141] libmachine: (ha-300623) DBG | private KVM network mk-ha-300623 192.168.39.0/24 created
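
For reference, the private network the driver just created can be reproduced outside the test harness with the libvirt Go binding. The sketch below is illustrative only: it assumes the libvirt.org/go/libvirt cgo package and a reachable qemu:///system socket, and it reuses the network XML printed in the log; it is not code from minikube itself.

    package main

    import (
    	"log"

    	"libvirt.org/go/libvirt"
    )

    func main() {
    	// Same shape as the XML minikube logged above for mk-ha-300623.
    	netXML := `<network>
      <name>mk-ha-300623</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	// Persist the definition, then bring the bridge up.
    	network, err := conn.NetworkDefineXML(netXML)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer network.Free()

    	if err := network.Create(); err != nil {
    		log.Fatal(err)
    	}
    }
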
	I1026 00:59:41.258172   27934 main.go:141] libmachine: (ha-300623) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623 ...
	I1026 00:59:41.258186   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.258104   27957 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:59:41.258203   27934 main.go:141] libmachine: (ha-300623) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 00:59:41.258226   27934 main.go:141] libmachine: (ha-300623) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 00:59:41.511971   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.511837   27957 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa...
	I1026 00:59:41.679961   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.679835   27957 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/ha-300623.rawdisk...
	I1026 00:59:41.680008   27934 main.go:141] libmachine: (ha-300623) DBG | Writing magic tar header
	I1026 00:59:41.680023   27934 main.go:141] libmachine: (ha-300623) DBG | Writing SSH key tar header
	I1026 00:59:41.680037   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.679951   27957 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623 ...
	I1026 00:59:41.680109   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623
	I1026 00:59:41.680139   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 00:59:41.680156   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623 (perms=drwx------)
	I1026 00:59:41.680166   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:59:41.680185   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 00:59:41.680194   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 00:59:41.680209   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins
	I1026 00:59:41.680219   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home
	I1026 00:59:41.680230   27934 main.go:141] libmachine: (ha-300623) DBG | Skipping /home - not owner
	I1026 00:59:41.680244   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 00:59:41.680257   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 00:59:41.680313   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 00:59:41.680344   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 00:59:41.680359   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 00:59:41.680367   27934 main.go:141] libmachine: (ha-300623) Creating domain...
	I1026 00:59:41.681340   27934 main.go:141] libmachine: (ha-300623) define libvirt domain using xml: 
	I1026 00:59:41.681362   27934 main.go:141] libmachine: (ha-300623) <domain type='kvm'>
	I1026 00:59:41.681370   27934 main.go:141] libmachine: (ha-300623)   <name>ha-300623</name>
	I1026 00:59:41.681381   27934 main.go:141] libmachine: (ha-300623)   <memory unit='MiB'>2200</memory>
	I1026 00:59:41.681403   27934 main.go:141] libmachine: (ha-300623)   <vcpu>2</vcpu>
	I1026 00:59:41.681438   27934 main.go:141] libmachine: (ha-300623)   <features>
	I1026 00:59:41.681448   27934 main.go:141] libmachine: (ha-300623)     <acpi/>
	I1026 00:59:41.681452   27934 main.go:141] libmachine: (ha-300623)     <apic/>
	I1026 00:59:41.681457   27934 main.go:141] libmachine: (ha-300623)     <pae/>
	I1026 00:59:41.681471   27934 main.go:141] libmachine: (ha-300623)     
	I1026 00:59:41.681479   27934 main.go:141] libmachine: (ha-300623)   </features>
	I1026 00:59:41.681484   27934 main.go:141] libmachine: (ha-300623)   <cpu mode='host-passthrough'>
	I1026 00:59:41.681489   27934 main.go:141] libmachine: (ha-300623)   
	I1026 00:59:41.681494   27934 main.go:141] libmachine: (ha-300623)   </cpu>
	I1026 00:59:41.681500   27934 main.go:141] libmachine: (ha-300623)   <os>
	I1026 00:59:41.681504   27934 main.go:141] libmachine: (ha-300623)     <type>hvm</type>
	I1026 00:59:41.681512   27934 main.go:141] libmachine: (ha-300623)     <boot dev='cdrom'/>
	I1026 00:59:41.681520   27934 main.go:141] libmachine: (ha-300623)     <boot dev='hd'/>
	I1026 00:59:41.681528   27934 main.go:141] libmachine: (ha-300623)     <bootmenu enable='no'/>
	I1026 00:59:41.681532   27934 main.go:141] libmachine: (ha-300623)   </os>
	I1026 00:59:41.681539   27934 main.go:141] libmachine: (ha-300623)   <devices>
	I1026 00:59:41.681544   27934 main.go:141] libmachine: (ha-300623)     <disk type='file' device='cdrom'>
	I1026 00:59:41.681575   27934 main.go:141] libmachine: (ha-300623)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/boot2docker.iso'/>
	I1026 00:59:41.681594   27934 main.go:141] libmachine: (ha-300623)       <target dev='hdc' bus='scsi'/>
	I1026 00:59:41.681606   27934 main.go:141] libmachine: (ha-300623)       <readonly/>
	I1026 00:59:41.681615   27934 main.go:141] libmachine: (ha-300623)     </disk>
	I1026 00:59:41.681625   27934 main.go:141] libmachine: (ha-300623)     <disk type='file' device='disk'>
	I1026 00:59:41.681635   27934 main.go:141] libmachine: (ha-300623)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 00:59:41.681651   27934 main.go:141] libmachine: (ha-300623)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/ha-300623.rawdisk'/>
	I1026 00:59:41.681664   27934 main.go:141] libmachine: (ha-300623)       <target dev='hda' bus='virtio'/>
	I1026 00:59:41.681675   27934 main.go:141] libmachine: (ha-300623)     </disk>
	I1026 00:59:41.681686   27934 main.go:141] libmachine: (ha-300623)     <interface type='network'>
	I1026 00:59:41.681698   27934 main.go:141] libmachine: (ha-300623)       <source network='mk-ha-300623'/>
	I1026 00:59:41.681709   27934 main.go:141] libmachine: (ha-300623)       <model type='virtio'/>
	I1026 00:59:41.681719   27934 main.go:141] libmachine: (ha-300623)     </interface>
	I1026 00:59:41.681734   27934 main.go:141] libmachine: (ha-300623)     <interface type='network'>
	I1026 00:59:41.681746   27934 main.go:141] libmachine: (ha-300623)       <source network='default'/>
	I1026 00:59:41.681756   27934 main.go:141] libmachine: (ha-300623)       <model type='virtio'/>
	I1026 00:59:41.681773   27934 main.go:141] libmachine: (ha-300623)     </interface>
	I1026 00:59:41.681784   27934 main.go:141] libmachine: (ha-300623)     <serial type='pty'>
	I1026 00:59:41.681794   27934 main.go:141] libmachine: (ha-300623)       <target port='0'/>
	I1026 00:59:41.681803   27934 main.go:141] libmachine: (ha-300623)     </serial>
	I1026 00:59:41.681813   27934 main.go:141] libmachine: (ha-300623)     <console type='pty'>
	I1026 00:59:41.681823   27934 main.go:141] libmachine: (ha-300623)       <target type='serial' port='0'/>
	I1026 00:59:41.681835   27934 main.go:141] libmachine: (ha-300623)     </console>
	I1026 00:59:41.681847   27934 main.go:141] libmachine: (ha-300623)     <rng model='virtio'>
	I1026 00:59:41.681861   27934 main.go:141] libmachine: (ha-300623)       <backend model='random'>/dev/random</backend>
	I1026 00:59:41.681876   27934 main.go:141] libmachine: (ha-300623)     </rng>
	I1026 00:59:41.681884   27934 main.go:141] libmachine: (ha-300623)     
	I1026 00:59:41.681893   27934 main.go:141] libmachine: (ha-300623)     
	I1026 00:59:41.681902   27934 main.go:141] libmachine: (ha-300623)   </devices>
	I1026 00:59:41.681910   27934 main.go:141] libmachine: (ha-300623) </domain>
	I1026 00:59:41.681919   27934 main.go:141] libmachine: (ha-300623) 
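
The <domain> XML above is handed to libvirt in the same way. A minimal sketch, again assuming the libvirt.org/go/libvirt binding, and using a deliberately stripped-down domain definition rather than the full one from the log (no disks, interfaces, serial console or rng):

    package main

    import (
    	"log"

    	"libvirt.org/go/libvirt"
    )

    func main() {
    	domXML := `<domain type='kvm'>
      <name>ha-300623</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <os><type>hvm</type><boot dev='hd'/></os>
    </domain>`

    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	// "define libvirt domain using xml" step: register the domain persistently.
    	dom, err := conn.DomainDefineXML(domXML)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer dom.Free()

    	// "Creating domain..." step: actually boot the defined VM.
    	if err := dom.Create(); err != nil {
    		log.Fatal(err)
    	}
    }
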
	I1026 00:59:41.685794   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:bc:3c:c8 in network default
	I1026 00:59:41.686289   27934 main.go:141] libmachine: (ha-300623) Ensuring networks are active...
	I1026 00:59:41.686312   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:41.686908   27934 main.go:141] libmachine: (ha-300623) Ensuring network default is active
	I1026 00:59:41.687318   27934 main.go:141] libmachine: (ha-300623) Ensuring network mk-ha-300623 is active
	I1026 00:59:41.687714   27934 main.go:141] libmachine: (ha-300623) Getting domain xml...
	I1026 00:59:41.688278   27934 main.go:141] libmachine: (ha-300623) Creating domain...
	I1026 00:59:42.865174   27934 main.go:141] libmachine: (ha-300623) Waiting to get IP...
	I1026 00:59:42.866030   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:42.866436   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:42.866478   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:42.866424   27957 retry.go:31] will retry after 310.395452ms: waiting for machine to come up
	I1026 00:59:43.178911   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:43.179377   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:43.179517   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:43.179326   27957 retry.go:31] will retry after 258.757335ms: waiting for machine to come up
	I1026 00:59:43.439460   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:43.439855   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:43.439883   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:43.439810   27957 retry.go:31] will retry after 476.137443ms: waiting for machine to come up
	I1026 00:59:43.917472   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:43.917875   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:43.917910   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:43.917853   27957 retry.go:31] will retry after 411.866237ms: waiting for machine to come up
	I1026 00:59:44.331261   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:44.331762   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:44.331800   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:44.331724   27957 retry.go:31] will retry after 639.236783ms: waiting for machine to come up
	I1026 00:59:44.972039   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:44.972415   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:44.972443   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:44.972363   27957 retry.go:31] will retry after 943.318782ms: waiting for machine to come up
	I1026 00:59:45.917370   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:45.917808   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:45.917870   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:45.917775   27957 retry.go:31] will retry after 1.007000764s: waiting for machine to come up
	I1026 00:59:46.926545   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:46.926930   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:46.926955   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:46.926890   27957 retry.go:31] will retry after 905.175073ms: waiting for machine to come up
	I1026 00:59:47.834112   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:47.834468   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:47.834505   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:47.834452   27957 retry.go:31] will retry after 1.696390131s: waiting for machine to come up
	I1026 00:59:49.533204   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:49.533596   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:49.533625   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:49.533577   27957 retry.go:31] will retry after 2.087564363s: waiting for machine to come up
	I1026 00:59:51.622505   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:51.622952   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:51.623131   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:51.622900   27957 retry.go:31] will retry after 2.813881441s: waiting for machine to come up
	I1026 00:59:54.439730   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:54.440081   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:54.440111   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:54.440045   27957 retry.go:31] will retry after 2.560428672s: waiting for machine to come up
	I1026 00:59:57.002066   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:57.002394   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:57.002424   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:57.002352   27957 retry.go:31] will retry after 3.377744145s: waiting for machine to come up
	I1026 01:00:00.384015   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.384460   27934 main.go:141] libmachine: (ha-300623) Found IP for machine: 192.168.39.183
	I1026 01:00:00.384479   27934 main.go:141] libmachine: (ha-300623) Reserving static IP address...
	I1026 01:00:00.384505   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has current primary IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.384856   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find host DHCP lease matching {name: "ha-300623", mac: "52:54:00:4d:a0:46", ip: "192.168.39.183"} in network mk-ha-300623
	I1026 01:00:00.455221   27934 main.go:141] libmachine: (ha-300623) DBG | Getting to WaitForSSH function...
	I1026 01:00:00.455245   27934 main.go:141] libmachine: (ha-300623) Reserved static IP address: 192.168.39.183
	I1026 01:00:00.455253   27934 main.go:141] libmachine: (ha-300623) Waiting for SSH to be available...
	I1026 01:00:00.457760   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.458200   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.458223   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.458402   27934 main.go:141] libmachine: (ha-300623) DBG | Using SSH client type: external
	I1026 01:00:00.458428   27934 main.go:141] libmachine: (ha-300623) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa (-rw-------)
	I1026 01:00:00.458460   27934 main.go:141] libmachine: (ha-300623) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 01:00:00.458475   27934 main.go:141] libmachine: (ha-300623) DBG | About to run SSH command:
	I1026 01:00:00.458487   27934 main.go:141] libmachine: (ha-300623) DBG | exit 0
	I1026 01:00:00.585473   27934 main.go:141] libmachine: (ha-300623) DBG | SSH cmd err, output: <nil>: 
	I1026 01:00:00.585717   27934 main.go:141] libmachine: (ha-300623) KVM machine creation complete!
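
The "will retry after ...: waiting for machine to come up" lines and the WaitForSSH step above follow a poll-with-backoff pattern: keep probing the guest until its SSH port accepts a TCP connection, sleeping a growing interval between attempts. A standalone sketch of that idea (the address is the one the log reports for ha-300623; the delays and overall timeout are illustrative, not minikube's actual values):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH polls addr until a TCP connection succeeds or the deadline passes.
    func waitForSSH(addr string, deadline time.Duration) error {
    	stop := time.Now().Add(deadline)
    	delay := 300 * time.Millisecond
    	for time.Now().Before(stop) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil // sshd is reachable
    		}
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    		if delay < 5*time.Second {
    			delay *= 2 // rough exponential backoff
    		}
    	}
    	return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
    	if err := waitForSSH("192.168.39.183:22", 3*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
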
	I1026 01:00:00.586041   27934 main.go:141] libmachine: (ha-300623) Calling .GetConfigRaw
	I1026 01:00:00.586564   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:00.586735   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:00.586856   27934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 01:00:00.586870   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:00.588144   27934 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 01:00:00.588156   27934 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 01:00:00.588161   27934 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 01:00:00.588166   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:00.590434   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.590800   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.590815   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.590958   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:00.591118   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.591291   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.591416   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:00.591579   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:00.591799   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:00.591812   27934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 01:00:00.700544   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:00:00.700568   27934 main.go:141] libmachine: Detecting the provisioner...
	I1026 01:00:00.700586   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:00.703305   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.703686   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.703708   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.703827   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:00.704016   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.704163   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.704286   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:00.704450   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:00.704607   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:00.704617   27934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 01:00:00.813937   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 01:00:00.814027   27934 main.go:141] libmachine: found compatible host: buildroot
	I1026 01:00:00.814042   27934 main.go:141] libmachine: Provisioning with buildroot...
	I1026 01:00:00.814078   27934 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:00:00.814305   27934 buildroot.go:166] provisioning hostname "ha-300623"
	I1026 01:00:00.814333   27934 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:00:00.814495   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:00.817076   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.817394   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.817438   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.817578   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:00.817764   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.817892   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.818015   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:00.818165   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:00.818334   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:00.818344   27934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-300623 && echo "ha-300623" | sudo tee /etc/hostname
	I1026 01:00:00.943069   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-300623
	
	I1026 01:00:00.943097   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:00.946005   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.946325   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.946354   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.946524   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:00.946840   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.947004   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.947144   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:00.947328   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:00.947549   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:00.947572   27934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-300623' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-300623/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-300623' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:00:01.065899   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:00:01.065958   27934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:00:01.066012   27934 buildroot.go:174] setting up certificates
	I1026 01:00:01.066027   27934 provision.go:84] configureAuth start
	I1026 01:00:01.066042   27934 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:00:01.066285   27934 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:00:01.069069   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.069397   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.069440   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.069574   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.071665   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.072025   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.072053   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.072211   27934 provision.go:143] copyHostCerts
	I1026 01:00:01.072292   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:00:01.072346   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:00:01.072359   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:00:01.072430   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:00:01.072514   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:00:01.072533   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:00:01.072540   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:00:01.072577   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:00:01.072670   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:00:01.072703   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:00:01.072711   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:00:01.072743   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:00:01.072808   27934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.ha-300623 san=[127.0.0.1 192.168.39.183 ha-300623 localhost minikube]
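
The "generating server cert" line reports the SAN set used for the machine's server certificate. A self-contained sketch of issuing such a certificate with Go's crypto/x509, using the SANs from that line; a throwaway CA is generated inline purely for illustration, whereas minikube signs with the ca.pem/ca-key.pem pair referenced in the log (error handling is elided for brevity):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (stand-in for minikubeCA).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the SAN set reported above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-300623"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-300623", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.183")},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
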
	I1026 01:00:01.133729   27934 provision.go:177] copyRemoteCerts
	I1026 01:00:01.133783   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:00:01.133804   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.136311   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.136591   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.136617   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.136770   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.136937   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.137059   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.137192   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:01.222921   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:00:01.222983   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:00:01.245372   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:00:01.245444   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1026 01:00:01.267891   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:00:01.267957   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 01:00:01.289667   27934 provision.go:87] duration metric: took 223.628307ms to configureAuth
	I1026 01:00:01.289699   27934 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:00:01.289880   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:01.289953   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.292672   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.292982   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.293012   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.293184   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.293375   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.293624   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.293732   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.293904   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:01.294111   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:01.294137   27934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:00:01.522070   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
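(Annotation for readers of this log: the command above is pushed to the VM over SSH, writing /etc/sysconfig/crio.minikube and restarting CRI-O so the insecure-registry flag takes effect. Below is a minimal sketch, not minikube's actual implementation, of running such a remote command with golang.org/x/crypto/ssh; the host, user, key path and command are taken from the log lines above, the helper name is hypothetical.)

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes one command on the target VM and returns its combined output.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only in a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.39.183:22", "docker",
		"/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa",
		`sudo tee /etc/sysconfig/crio.minikube <<'EOF' && sudo systemctl restart crio
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
EOF`)
	fmt.Println(out, err)
}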
	
	I1026 01:00:01.522096   27934 main.go:141] libmachine: Checking connection to Docker...
	I1026 01:00:01.522103   27934 main.go:141] libmachine: (ha-300623) Calling .GetURL
	I1026 01:00:01.523378   27934 main.go:141] libmachine: (ha-300623) DBG | Using libvirt version 6000000
	I1026 01:00:01.525286   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.525641   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.525670   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.525803   27934 main.go:141] libmachine: Docker is up and running!
	I1026 01:00:01.525822   27934 main.go:141] libmachine: Reticulating splines...
	I1026 01:00:01.525829   27934 client.go:171] duration metric: took 20.337349207s to LocalClient.Create
	I1026 01:00:01.525853   27934 start.go:167] duration metric: took 20.337416513s to libmachine.API.Create "ha-300623"
	I1026 01:00:01.525867   27934 start.go:293] postStartSetup for "ha-300623" (driver="kvm2")
	I1026 01:00:01.525878   27934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:00:01.525899   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.526150   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:00:01.526178   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.528275   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.528583   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.528614   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.528742   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.528907   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.529035   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.529169   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:01.615528   27934 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:00:01.619526   27934 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:00:01.619547   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:00:01.619607   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:00:01.619676   27934 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:00:01.619685   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /etc/ssl/certs/176152.pem
	I1026 01:00:01.619772   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:00:01.628818   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:00:01.651055   27934 start.go:296] duration metric: took 125.175871ms for postStartSetup
	I1026 01:00:01.651106   27934 main.go:141] libmachine: (ha-300623) Calling .GetConfigRaw
	I1026 01:00:01.651707   27934 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:00:01.654048   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.654337   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.654358   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.654637   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:00:01.654812   27934 start.go:128] duration metric: took 20.484504528s to createHost
	I1026 01:00:01.654833   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.656877   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.657252   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.657277   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.657399   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.657609   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.657759   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.657866   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.657999   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:01.658194   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:01.658205   27934 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:00:01.770028   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729904401.731044736
	
	I1026 01:00:01.770051   27934 fix.go:216] guest clock: 1729904401.731044736
	I1026 01:00:01.770074   27934 fix.go:229] Guest: 2024-10-26 01:00:01.731044736 +0000 UTC Remote: 2024-10-26 01:00:01.654822884 +0000 UTC m=+20.590184391 (delta=76.221852ms)
	I1026 01:00:01.770101   27934 fix.go:200] guest clock delta is within tolerance: 76.221852ms
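(Annotation: the fix.go lines above compare the guest clock read over SSH with the host clock and proceed only when the delta stays within a tolerance. A minimal sketch of that check follows; the one-second tolerance is an illustrative assumption, not a value taken from the log.)

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the absolute guest/host clock delta and whether it is within tolerance.
func clockDeltaOK(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(76 * time.Millisecond) // a delta of the same order as the log above
	delta, ok := clockDeltaOK(host, guest, time.Second)
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok)
}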
	I1026 01:00:01.770108   27934 start.go:83] releasing machines lock for "ha-300623", held for 20.599868049s
	I1026 01:00:01.770184   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.770452   27934 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:00:01.772669   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.773035   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.773066   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.773320   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.773757   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.773942   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.774055   27934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:00:01.774095   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.774157   27934 ssh_runner.go:195] Run: cat /version.json
	I1026 01:00:01.774180   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.776503   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.776822   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.776846   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.776862   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.777013   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.777160   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.777266   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.777287   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.777291   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.777476   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.777463   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:01.777588   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.777703   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.777819   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:01.889672   27934 ssh_runner.go:195] Run: systemctl --version
	I1026 01:00:01.895441   27934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:00:02.062750   27934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:00:02.068559   27934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:00:02.068640   27934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:00:02.085755   27934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 01:00:02.085784   27934 start.go:495] detecting cgroup driver to use...
	I1026 01:00:02.085879   27934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:00:02.103715   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:00:02.116629   27934 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:00:02.116698   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:00:02.129921   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:00:02.143297   27934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:00:02.262539   27934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:00:02.410776   27934 docker.go:233] disabling docker service ...
	I1026 01:00:02.410852   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:00:02.425252   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:00:02.438874   27934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:00:02.567343   27934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:00:02.692382   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:00:02.705780   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:00:02.723128   27934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:00:02.723196   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.733126   27934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:00:02.733204   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.743104   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.752720   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.762245   27934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:00:02.772039   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.781522   27934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.797499   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.807723   27934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:00:02.816764   27934 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 01:00:02.816838   27934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 01:00:02.830364   27934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:00:02.840309   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:00:02.959488   27934 ssh_runner.go:195] Run: sudo systemctl restart crio
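(Annotation: the sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed, switching the pause image and cgroup manager, before reloading systemd and restarting CRI-O. A minimal sketch that reconstructs two of those edits as command strings; the helper name is hypothetical.)

package main

import "fmt"

// sedSetKey builds the same in-place edit the log shows: replace the whole
// line defining key with `key = "value"`.
func sedSetKey(key, value, file string) string {
	return fmt.Sprintf(`sudo sed -i 's|^.*%s = .*$|%s = "%s"|' %s`, key, key, value, file)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	fmt.Println(sedSetKey("pause_image", "registry.k8s.io/pause:3.10", conf))
	fmt.Println(sedSetKey("cgroup_manager", "cgroupfs", conf))
}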
	I1026 01:00:03.048870   27934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:00:03.048952   27934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:00:03.053750   27934 start.go:563] Will wait 60s for crictl version
	I1026 01:00:03.053801   27934 ssh_runner.go:195] Run: which crictl
	I1026 01:00:03.057147   27934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:00:03.096489   27934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:00:03.096564   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:00:03.124313   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:00:03.153078   27934 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:00:03.154469   27934 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:00:03.157053   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:03.157290   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:03.157320   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:03.157571   27934 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:00:03.161502   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
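(Annotation: the one-liner above removes any stale host.minikube.internal entry from /etc/hosts and appends the gateway IP again. A minimal sketch that builds the same shell command; the helper name is hypothetical, and note that the grep pattern carries a literal \t while the echo embeds a real tab.)

package main

import "fmt"

// hostsUpdateCmd rebuilds /etc/hosts without the old entry for name, then
// appends "ip<TAB>name", mirroring the one-liner in the log above.
func hostsUpdateCmd(ip, name string) string {
	return fmt.Sprintf(
		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		name, ip, name)
}

func main() {
	fmt.Println(hostsUpdateCmd("192.168.39.1", "host.minikube.internal"))
}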
	I1026 01:00:03.173922   27934 kubeadm.go:883] updating cluster {Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 01:00:03.174024   27934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:00:03.174067   27934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:00:03.205502   27934 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1026 01:00:03.205563   27934 ssh_runner.go:195] Run: which lz4
	I1026 01:00:03.209242   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1026 01:00:03.209334   27934 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 01:00:03.213268   27934 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 01:00:03.213294   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1026 01:00:04.450368   27934 crio.go:462] duration metric: took 1.241064009s to copy over tarball
	I1026 01:00:04.450448   27934 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 01:00:06.473538   27934 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.023056026s)
	I1026 01:00:06.473572   27934 crio.go:469] duration metric: took 2.023171959s to extract the tarball
	I1026 01:00:06.473605   27934 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 01:00:06.509382   27934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:00:06.550351   27934 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 01:00:06.550371   27934 cache_images.go:84] Images are preloaded, skipping loading
	I1026 01:00:06.550379   27934 kubeadm.go:934] updating node { 192.168.39.183 8443 v1.31.2 crio true true} ...
	I1026 01:00:06.550479   27934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-300623 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 01:00:06.550540   27934 ssh_runner.go:195] Run: crio config
	I1026 01:00:06.601899   27934 cni.go:84] Creating CNI manager for ""
	I1026 01:00:06.601920   27934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1026 01:00:06.601928   27934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 01:00:06.601953   27934 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-300623 NodeName:ha-300623 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 01:00:06.602065   27934 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-300623"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.183"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
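(Annotation: the kubeadm, kubelet and kube-proxy documents above are generated from the option set logged at kubeadm.go:189. A minimal sketch of rendering one small fragment of such a config with text/template; the template here is a trimmed, hypothetical subset, not minikube's real template.)

package main

import (
	"os"
	"text/template"
)

// clusterTmpl covers only a few of the ClusterConfiguration fields shown above.
const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

type clusterParams struct {
	KubernetesVersion    string
	ControlPlaneEndpoint string
	PodSubnet            string
	ServiceSubnet        string
}

func main() {
	t := template.Must(template.New("cluster").Parse(clusterTmpl))
	_ = t.Execute(os.Stdout, clusterParams{
		KubernetesVersion:    "v1.31.2",
		ControlPlaneEndpoint: "control-plane.minikube.internal:8443",
		PodSubnet:            "10.244.0.0/16",
		ServiceSubnet:        "10.96.0.0/12",
	})
}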
	
	I1026 01:00:06.602090   27934 kube-vip.go:115] generating kube-vip config ...
	I1026 01:00:06.602134   27934 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1026 01:00:06.618905   27934 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1026 01:00:06.619004   27934 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1026 01:00:06.619054   27934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:00:06.628422   27934 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 01:00:06.628482   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1026 01:00:06.637507   27934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1026 01:00:06.653506   27934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:00:06.669385   27934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1026 01:00:06.685316   27934 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1026 01:00:06.701298   27934 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1026 01:00:06.704780   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:00:06.716358   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:00:06.835294   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:00:06.851617   27934 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623 for IP: 192.168.39.183
	I1026 01:00:06.851643   27934 certs.go:194] generating shared ca certs ...
	I1026 01:00:06.851663   27934 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:06.851825   27934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:00:06.851928   27934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:00:06.851951   27934 certs.go:256] generating profile certs ...
	I1026 01:00:06.852032   27934 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key
	I1026 01:00:06.852053   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt with IP's: []
	I1026 01:00:07.025844   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt ...
	I1026 01:00:07.025878   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt: {Name:mk0969781384c8eb24d904330417d9f7d1f6988a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.026073   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key ...
	I1026 01:00:07.026087   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key: {Name:mkbd66f66cfdc11b06ed7ee27efeab2c35691371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.026190   27934 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.30b82e6a
	I1026 01:00:07.026206   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.30b82e6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.254]
	I1026 01:00:07.091648   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.30b82e6a ...
	I1026 01:00:07.091676   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.30b82e6a: {Name:mk79ee9c8c68f427992ae46daac972e5a80d39e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.091862   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.30b82e6a ...
	I1026 01:00:07.091878   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.30b82e6a: {Name:mk0161ea9da0d9d1941870c52b97be187bff2c45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.091976   27934 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.30b82e6a -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt
	I1026 01:00:07.092075   27934 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.30b82e6a -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key
	I1026 01:00:07.092130   27934 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key
	I1026 01:00:07.092145   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt with IP's: []
	I1026 01:00:07.288723   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt ...
	I1026 01:00:07.288754   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt: {Name:mka585c80540dcf4447ce80873c4b4204a6ac833 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.288941   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key ...
	I1026 01:00:07.288955   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key: {Name:mk2a46d0d0037729eebdc4ee5998eb5ddbae3abb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.289048   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:00:07.289071   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:00:07.289091   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:00:07.289110   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:00:07.289128   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:00:07.289145   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:00:07.289157   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:00:07.289174   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:00:07.289238   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:00:07.289301   27934 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:00:07.289321   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:00:07.289357   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:00:07.289389   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:00:07.289437   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:00:07.289497   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:00:07.289533   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /usr/share/ca-certificates/176152.pem
	I1026 01:00:07.289554   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:07.289572   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem -> /usr/share/ca-certificates/17615.pem
	I1026 01:00:07.290185   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:00:07.315249   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:00:07.338589   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:00:07.361991   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:00:07.385798   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 01:00:07.409069   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 01:00:07.431845   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:00:07.454880   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:00:07.477392   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:00:07.500857   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:00:07.523684   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:00:07.546154   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 01:00:07.562082   27934 ssh_runner.go:195] Run: openssl version
	I1026 01:00:07.567710   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:00:07.578511   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:00:07.582871   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:00:07.582924   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:00:07.588401   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:00:07.601567   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:00:07.628525   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:07.634748   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:07.634819   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:07.643756   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:00:07.657734   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:00:07.668305   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:00:07.672451   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:00:07.672508   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:00:07.677939   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
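(Annotation: each certificate above is copied into /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash, which is where the 3ec20f2e.0, b5213941.0 and 51391683.0 names come from. A minimal sketch of that hash-and-symlink step for one file; it assumes openssl on the PATH and write access to /etc/ssl/certs.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash computes the certificate's subject hash with openssl and creates
// the /etc/ssl/certs/<hash>.0 symlink the log shows (the ln -fs equivalent).
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // force-replace any existing link, as ln -fs would
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}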
	I1026 01:00:07.688219   27934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:00:07.691924   27934 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 01:00:07.691988   27934 kubeadm.go:392] StartCluster: {Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:00:07.692059   27934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 01:00:07.692137   27934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 01:00:07.731345   27934 cri.go:89] found id: ""
	I1026 01:00:07.731417   27934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 01:00:07.741208   27934 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 01:00:07.750623   27934 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 01:00:07.760311   27934 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 01:00:07.760340   27934 kubeadm.go:157] found existing configuration files:
	
	I1026 01:00:07.760383   27934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 01:00:07.769207   27934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 01:00:07.769267   27934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 01:00:07.778578   27934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 01:00:07.787579   27934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 01:00:07.787661   27934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 01:00:07.797042   27934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 01:00:07.805955   27934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 01:00:07.806016   27934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 01:00:07.815274   27934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 01:00:07.824206   27934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 01:00:07.824269   27934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
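(Annotation: the stale-config pass above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it; on this first start all four files are simply absent. A minimal local sketch of that logic; the real code runs these checks remotely via ssh_runner.)

package main

import (
	"os"
	"strings"
)

// cleanStaleConfigs removes any kubeconfig that exists but does not point at
// the expected control-plane endpoint; missing files are left alone.
func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // file absent: nothing to clean up
		}
		if !strings.Contains(string(data), endpoint) {
			_ = os.Remove(f) // stale: references a different endpoint
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}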
	I1026 01:00:07.833410   27934 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 01:00:07.938802   27934 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1026 01:00:07.938923   27934 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 01:00:08.028635   27934 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 01:00:08.028791   27934 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 01:00:08.028932   27934 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 01:00:08.038844   27934 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 01:00:08.041881   27934 out.go:235]   - Generating certificates and keys ...
	I1026 01:00:08.042903   27934 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 01:00:08.042973   27934 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 01:00:08.315204   27934 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 01:00:08.725495   27934 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1026 01:00:08.806960   27934 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1026 01:00:08.984098   27934 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1026 01:00:09.149484   27934 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1026 01:00:09.149653   27934 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-300623 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1026 01:00:09.309448   27934 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1026 01:00:09.309592   27934 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-300623 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1026 01:00:09.556294   27934 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 01:00:09.712766   27934 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 01:00:10.018193   27934 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1026 01:00:10.018258   27934 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 01:00:10.257230   27934 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 01:00:10.645833   27934 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 01:00:10.887377   27934 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 01:00:11.179208   27934 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 01:00:11.353056   27934 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 01:00:11.353655   27934 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 01:00:11.356992   27934 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 01:00:11.358796   27934 out.go:235]   - Booting up control plane ...
	I1026 01:00:11.358907   27934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 01:00:11.358983   27934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 01:00:11.359320   27934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 01:00:11.375691   27934 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 01:00:11.384224   27934 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 01:00:11.384282   27934 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 01:00:11.520735   27934 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 01:00:11.520904   27934 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 01:00:12.022375   27934 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.622573ms
	I1026 01:00:12.022456   27934 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1026 01:00:18.050317   27934 kubeadm.go:310] [api-check] The API server is healthy after 6.027294666s
	I1026 01:00:18.065132   27934 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 01:00:18.091049   27934 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 01:00:18.625277   27934 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 01:00:18.625502   27934 kubeadm.go:310] [mark-control-plane] Marking the node ha-300623 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 01:00:18.641286   27934 kubeadm.go:310] [bootstrap-token] Using token: 0x0agx.12z45ob3hq7so0d8
	I1026 01:00:18.642941   27934 out.go:235]   - Configuring RBAC rules ...
	I1026 01:00:18.643084   27934 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 01:00:18.651507   27934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 01:00:18.661575   27934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 01:00:18.665545   27934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 01:00:18.669512   27934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 01:00:18.677272   27934 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 01:00:18.691190   27934 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 01:00:18.958591   27934 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1026 01:00:19.464064   27934 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1026 01:00:19.464088   27934 kubeadm.go:310] 
	I1026 01:00:19.464204   27934 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1026 01:00:19.464225   27934 kubeadm.go:310] 
	I1026 01:00:19.464365   27934 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1026 01:00:19.464377   27934 kubeadm.go:310] 
	I1026 01:00:19.464406   27934 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1026 01:00:19.464485   27934 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 01:00:19.464567   27934 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 01:00:19.464579   27934 kubeadm.go:310] 
	I1026 01:00:19.464644   27934 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1026 01:00:19.464655   27934 kubeadm.go:310] 
	I1026 01:00:19.464719   27934 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 01:00:19.464726   27934 kubeadm.go:310] 
	I1026 01:00:19.464814   27934 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1026 01:00:19.464930   27934 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 01:00:19.465024   27934 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 01:00:19.465033   27934 kubeadm.go:310] 
	I1026 01:00:19.465247   27934 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 01:00:19.465347   27934 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1026 01:00:19.465355   27934 kubeadm.go:310] 
	I1026 01:00:19.465464   27934 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0x0agx.12z45ob3hq7so0d8 \
	I1026 01:00:19.465592   27934 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d \
	I1026 01:00:19.465626   27934 kubeadm.go:310] 	--control-plane 
	I1026 01:00:19.465634   27934 kubeadm.go:310] 
	I1026 01:00:19.465757   27934 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1026 01:00:19.465771   27934 kubeadm.go:310] 
	I1026 01:00:19.465887   27934 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0x0agx.12z45ob3hq7so0d8 \
	I1026 01:00:19.466042   27934 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d 
	I1026 01:00:19.466324   27934 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
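Both join commands above pin the cluster CA with a --discovery-token-ca-cert-hash value. As a hedged illustration (not minikube's own code), that sha256 pin can be recomputed from the CA certificate; the /etc/kubernetes/pki/ca.crt path is the kubeadm default and is assumed here:

    // Recompute kubeadm's --discovery-token-ca-cert-hash from a CA certificate.
    // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // path assumed (kubeadm default)
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
    }

Run against this cluster's CA, the output should match the sha256:b3d001... value embedded in both join commands above.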
	I1026 01:00:19.466354   27934 cni.go:84] Creating CNI manager for ""
	I1026 01:00:19.466370   27934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1026 01:00:19.468090   27934 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1026 01:00:19.469492   27934 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 01:00:19.474603   27934 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1026 01:00:19.474628   27934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 01:00:19.493103   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 01:00:19.838794   27934 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 01:00:19.838909   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:19.838923   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-300623 minikube.k8s.io/updated_at=2024_10_26T01_00_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=ha-300623 minikube.k8s.io/primary=true
	I1026 01:00:19.860886   27934 ops.go:34] apiserver oom_adj: -16
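The oom_adj read above confirms the API server is shielded from the OOM killer (-16). A minimal sketch of the same procfs check, assuming pgrep is on PATH and kube-apiserver is running locally:

    // Hedged sketch: resolve the kube-apiserver PID with pgrep and read its
    // OOM score adjustment from procfs, as the logged command does.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pids := strings.Fields(string(out))
        if len(pids) == 0 {
            panic("kube-apiserver not running")
        }
        adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
    }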
	I1026 01:00:19.991866   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:20.492140   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:20.992964   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:21.492707   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:21.992237   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:22.491957   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:22.992426   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:23.492181   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:23.615897   27934 kubeadm.go:1113] duration metric: took 3.777077904s to wait for elevateKubeSystemPrivileges
	I1026 01:00:23.615938   27934 kubeadm.go:394] duration metric: took 15.923953549s to StartCluster
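The burst of `kubectl get sa default` runs above is a roughly 500 ms poll that returns once the default ServiceAccount exists; that wait is the bulk of the 3.77 s elevateKubeSystemPrivileges metric. A hedged sketch of the same polling pattern (kubeconfig path taken from the log, timeout value assumed):

    // Hedged sketch: retry `kubectl get sa default` every 500ms until the
    // ServiceAccount exists or a deadline passes. Not minikube's own code.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // timeout assumed
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig", "/var/lib/minikube/kubeconfig") // path from the log above
            if err := cmd.Run(); err == nil {
                fmt.Println("default ServiceAccount is available")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default ServiceAccount")
    }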
	I1026 01:00:23.615966   27934 settings.go:142] acquiring lock: {Name:mkb363a7a1b1532a7f832b54a0283d0a9e3d2b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:23.616076   27934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:00:23.616984   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:23.617268   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 01:00:23.617267   27934 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:00:23.617376   27934 start.go:241] waiting for startup goroutines ...
	I1026 01:00:23.617295   27934 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 01:00:23.617401   27934 addons.go:69] Setting storage-provisioner=true in profile "ha-300623"
	I1026 01:00:23.617447   27934 addons.go:234] Setting addon storage-provisioner=true in "ha-300623"
	I1026 01:00:23.617472   27934 addons.go:69] Setting default-storageclass=true in profile "ha-300623"
	I1026 01:00:23.617485   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:00:23.617498   27934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-300623"
	I1026 01:00:23.617505   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:23.617969   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.618010   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.618031   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.618073   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.633825   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35933
	I1026 01:00:23.633917   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38951
	I1026 01:00:23.634401   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.634418   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.634846   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.634864   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.634968   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.634988   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.635198   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.635332   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.635386   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:23.635834   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.635876   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.637603   27934 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:00:23.637812   27934 kapi.go:59] client config for ha-300623: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt", KeyFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key", CAFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 01:00:23.638218   27934 cert_rotation.go:140] Starting client certificate rotation controller
	I1026 01:00:23.638343   27934 addons.go:234] Setting addon default-storageclass=true in "ha-300623"
	I1026 01:00:23.638387   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:00:23.638626   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.638653   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.651480   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45267
	I1026 01:00:23.651965   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.652480   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.652510   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.652799   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.652991   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:23.653021   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42361
	I1026 01:00:23.654147   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.654693   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.654718   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.654832   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:23.655239   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.655791   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.655841   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.656920   27934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:00:23.658814   27934 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:00:23.658834   27934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 01:00:23.658853   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:23.662101   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:23.662598   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:23.662632   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:23.662848   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:23.663049   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:23.663200   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:23.663316   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:23.671976   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I1026 01:00:23.672433   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.672925   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.672950   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.673249   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.673483   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:23.675058   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:23.675265   27934 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 01:00:23.675282   27934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 01:00:23.675298   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:23.678185   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:23.678589   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:23.678611   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:23.678792   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:23.678957   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:23.679108   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:23.679249   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:23.762178   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 01:00:23.824448   27934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:00:23.874821   27934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 01:00:24.116804   27934 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1026 01:00:24.301862   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.301884   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.301919   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.301937   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.302168   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.302185   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.302194   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.302193   27934 main.go:141] libmachine: (ha-300623) DBG | Closing plugin on server side
	I1026 01:00:24.302200   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.302168   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.302221   27934 main.go:141] libmachine: (ha-300623) DBG | Closing plugin on server side
	I1026 01:00:24.302229   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.302239   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.302246   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.302447   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.302464   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.302531   27934 main.go:141] libmachine: (ha-300623) DBG | Closing plugin on server side
	I1026 01:00:24.302526   27934 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1026 01:00:24.302571   27934 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1026 01:00:24.302606   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.302631   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.302680   27934 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1026 01:00:24.302699   27934 round_trippers.go:469] Request Headers:
	I1026 01:00:24.302706   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:00:24.302710   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:00:24.315108   27934 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1026 01:00:24.315658   27934 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1026 01:00:24.315672   27934 round_trippers.go:469] Request Headers:
	I1026 01:00:24.315679   27934 round_trippers.go:473]     Content-Type: application/json
	I1026 01:00:24.315683   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:00:24.315686   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:00:24.318571   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:00:24.318791   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.318805   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.319072   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.319089   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.319093   27934 main.go:141] libmachine: (ha-300623) DBG | Closing plugin on server side
	I1026 01:00:24.321441   27934 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1026 01:00:24.323036   27934 addons.go:510] duration metric: took 705.743688ms for enable addons: enabled=[storage-provisioner default-storageclass]
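The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above is the default-storageclass addon checking and updating the "standard" class. A hedged client-go sketch that lists StorageClasses and reports the one annotated as default; the kubeconfig path is an assumption and this is not minikube's own implementation:

    // Hedged client-go sketch mirroring the GET above: list StorageClasses and
    // report which one carries the default-class annotation.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path assumed
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, sc := range scs.Items {
            if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
                fmt.Printf("default StorageClass: %s\n", sc.Name)
            }
        }
    }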
	I1026 01:00:24.323074   27934 start.go:246] waiting for cluster config update ...
	I1026 01:00:24.323088   27934 start.go:255] writing updated cluster config ...
	I1026 01:00:24.324580   27934 out.go:201] 
	I1026 01:00:24.325800   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:24.325876   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:00:24.327345   27934 out.go:177] * Starting "ha-300623-m02" control-plane node in "ha-300623" cluster
	I1026 01:00:24.329009   27934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:00:24.329028   27934 cache.go:56] Caching tarball of preloaded images
	I1026 01:00:24.329124   27934 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:00:24.329138   27934 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 01:00:24.329209   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:00:24.329375   27934 start.go:360] acquireMachinesLock for ha-300623-m02: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 01:00:24.329429   27934 start.go:364] duration metric: took 35.088µs to acquireMachinesLock for "ha-300623-m02"
	I1026 01:00:24.329452   27934 start.go:93] Provisioning new machine with config: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:00:24.329544   27934 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1026 01:00:24.330943   27934 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1026 01:00:24.331025   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:24.331057   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:24.345495   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40299
	I1026 01:00:24.346002   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:24.346476   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:24.346491   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:24.346765   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:24.346970   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetMachineName
	I1026 01:00:24.347113   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:24.347293   27934 start.go:159] libmachine.API.Create for "ha-300623" (driver="kvm2")
	I1026 01:00:24.347323   27934 client.go:168] LocalClient.Create starting
	I1026 01:00:24.347359   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 01:00:24.347400   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 01:00:24.347421   27934 main.go:141] libmachine: Parsing certificate...
	I1026 01:00:24.347493   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 01:00:24.347519   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 01:00:24.347536   27934 main.go:141] libmachine: Parsing certificate...
	I1026 01:00:24.347559   27934 main.go:141] libmachine: Running pre-create checks...
	I1026 01:00:24.347568   27934 main.go:141] libmachine: (ha-300623-m02) Calling .PreCreateCheck
	I1026 01:00:24.347721   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetConfigRaw
	I1026 01:00:24.348120   27934 main.go:141] libmachine: Creating machine...
	I1026 01:00:24.348135   27934 main.go:141] libmachine: (ha-300623-m02) Calling .Create
	I1026 01:00:24.348260   27934 main.go:141] libmachine: (ha-300623-m02) Creating KVM machine...
	I1026 01:00:24.349505   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found existing default KVM network
	I1026 01:00:24.349630   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found existing private KVM network mk-ha-300623
	I1026 01:00:24.349770   27934 main.go:141] libmachine: (ha-300623-m02) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02 ...
	I1026 01:00:24.349806   27934 main.go:141] libmachine: (ha-300623-m02) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 01:00:24.349877   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:24.349757   28306 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:00:24.349949   27934 main.go:141] libmachine: (ha-300623-m02) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 01:00:24.581858   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:24.581729   28306 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa...
	I1026 01:00:24.824457   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:24.824338   28306 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/ha-300623-m02.rawdisk...
	I1026 01:00:24.824488   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Writing magic tar header
	I1026 01:00:24.824501   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Writing SSH key tar header
	I1026 01:00:24.824514   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:24.824445   28306 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02 ...
	I1026 01:00:24.824563   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02
	I1026 01:00:24.824601   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 01:00:24.824632   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:00:24.824643   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02 (perms=drwx------)
	I1026 01:00:24.824650   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 01:00:24.824656   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 01:00:24.824665   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 01:00:24.824671   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 01:00:24.824679   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 01:00:24.824685   27934 main.go:141] libmachine: (ha-300623-m02) Creating domain...
	I1026 01:00:24.824694   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 01:00:24.824702   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 01:00:24.824707   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins
	I1026 01:00:24.824717   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home
	I1026 01:00:24.824748   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Skipping /home - not owner
	I1026 01:00:24.825705   27934 main.go:141] libmachine: (ha-300623-m02) define libvirt domain using xml: 
	I1026 01:00:24.825725   27934 main.go:141] libmachine: (ha-300623-m02) <domain type='kvm'>
	I1026 01:00:24.825740   27934 main.go:141] libmachine: (ha-300623-m02)   <name>ha-300623-m02</name>
	I1026 01:00:24.825751   27934 main.go:141] libmachine: (ha-300623-m02)   <memory unit='MiB'>2200</memory>
	I1026 01:00:24.825760   27934 main.go:141] libmachine: (ha-300623-m02)   <vcpu>2</vcpu>
	I1026 01:00:24.825769   27934 main.go:141] libmachine: (ha-300623-m02)   <features>
	I1026 01:00:24.825777   27934 main.go:141] libmachine: (ha-300623-m02)     <acpi/>
	I1026 01:00:24.825786   27934 main.go:141] libmachine: (ha-300623-m02)     <apic/>
	I1026 01:00:24.825807   27934 main.go:141] libmachine: (ha-300623-m02)     <pae/>
	I1026 01:00:24.825825   27934 main.go:141] libmachine: (ha-300623-m02)     
	I1026 01:00:24.825837   27934 main.go:141] libmachine: (ha-300623-m02)   </features>
	I1026 01:00:24.825845   27934 main.go:141] libmachine: (ha-300623-m02)   <cpu mode='host-passthrough'>
	I1026 01:00:24.825850   27934 main.go:141] libmachine: (ha-300623-m02)   
	I1026 01:00:24.825856   27934 main.go:141] libmachine: (ha-300623-m02)   </cpu>
	I1026 01:00:24.825861   27934 main.go:141] libmachine: (ha-300623-m02)   <os>
	I1026 01:00:24.825868   27934 main.go:141] libmachine: (ha-300623-m02)     <type>hvm</type>
	I1026 01:00:24.825873   27934 main.go:141] libmachine: (ha-300623-m02)     <boot dev='cdrom'/>
	I1026 01:00:24.825880   27934 main.go:141] libmachine: (ha-300623-m02)     <boot dev='hd'/>
	I1026 01:00:24.825888   27934 main.go:141] libmachine: (ha-300623-m02)     <bootmenu enable='no'/>
	I1026 01:00:24.825901   27934 main.go:141] libmachine: (ha-300623-m02)   </os>
	I1026 01:00:24.825911   27934 main.go:141] libmachine: (ha-300623-m02)   <devices>
	I1026 01:00:24.825922   27934 main.go:141] libmachine: (ha-300623-m02)     <disk type='file' device='cdrom'>
	I1026 01:00:24.825934   27934 main.go:141] libmachine: (ha-300623-m02)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/boot2docker.iso'/>
	I1026 01:00:24.825942   27934 main.go:141] libmachine: (ha-300623-m02)       <target dev='hdc' bus='scsi'/>
	I1026 01:00:24.825947   27934 main.go:141] libmachine: (ha-300623-m02)       <readonly/>
	I1026 01:00:24.825955   27934 main.go:141] libmachine: (ha-300623-m02)     </disk>
	I1026 01:00:24.825960   27934 main.go:141] libmachine: (ha-300623-m02)     <disk type='file' device='disk'>
	I1026 01:00:24.825967   27934 main.go:141] libmachine: (ha-300623-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 01:00:24.825975   27934 main.go:141] libmachine: (ha-300623-m02)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/ha-300623-m02.rawdisk'/>
	I1026 01:00:24.825984   27934 main.go:141] libmachine: (ha-300623-m02)       <target dev='hda' bus='virtio'/>
	I1026 01:00:24.825991   27934 main.go:141] libmachine: (ha-300623-m02)     </disk>
	I1026 01:00:24.826012   27934 main.go:141] libmachine: (ha-300623-m02)     <interface type='network'>
	I1026 01:00:24.826033   27934 main.go:141] libmachine: (ha-300623-m02)       <source network='mk-ha-300623'/>
	I1026 01:00:24.826045   27934 main.go:141] libmachine: (ha-300623-m02)       <model type='virtio'/>
	I1026 01:00:24.826054   27934 main.go:141] libmachine: (ha-300623-m02)     </interface>
	I1026 01:00:24.826063   27934 main.go:141] libmachine: (ha-300623-m02)     <interface type='network'>
	I1026 01:00:24.826074   27934 main.go:141] libmachine: (ha-300623-m02)       <source network='default'/>
	I1026 01:00:24.826082   27934 main.go:141] libmachine: (ha-300623-m02)       <model type='virtio'/>
	I1026 01:00:24.826091   27934 main.go:141] libmachine: (ha-300623-m02)     </interface>
	I1026 01:00:24.826098   27934 main.go:141] libmachine: (ha-300623-m02)     <serial type='pty'>
	I1026 01:00:24.826107   27934 main.go:141] libmachine: (ha-300623-m02)       <target port='0'/>
	I1026 01:00:24.826112   27934 main.go:141] libmachine: (ha-300623-m02)     </serial>
	I1026 01:00:24.826119   27934 main.go:141] libmachine: (ha-300623-m02)     <console type='pty'>
	I1026 01:00:24.826136   27934 main.go:141] libmachine: (ha-300623-m02)       <target type='serial' port='0'/>
	I1026 01:00:24.826153   27934 main.go:141] libmachine: (ha-300623-m02)     </console>
	I1026 01:00:24.826166   27934 main.go:141] libmachine: (ha-300623-m02)     <rng model='virtio'>
	I1026 01:00:24.826178   27934 main.go:141] libmachine: (ha-300623-m02)       <backend model='random'>/dev/random</backend>
	I1026 01:00:24.826187   27934 main.go:141] libmachine: (ha-300623-m02)     </rng>
	I1026 01:00:24.826194   27934 main.go:141] libmachine: (ha-300623-m02)     
	I1026 01:00:24.826201   27934 main.go:141] libmachine: (ha-300623-m02)     
	I1026 01:00:24.826210   27934 main.go:141] libmachine: (ha-300623-m02)   </devices>
	I1026 01:00:24.826218   27934 main.go:141] libmachine: (ha-300623-m02) </domain>
	I1026 01:00:24.826230   27934 main.go:141] libmachine: (ha-300623-m02) 
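The XML logged above is the libvirt domain definition generated for the ha-300623-m02 VM. A hedged sketch of defining and booting such a domain with the libvirt Go bindings (module libvirt.org/go/libvirt, cgo build requiring libvirt development headers); the XML file name is an assumption, and the qemu:///system URI comes from the KVMQemuURI field logged earlier:

    // Hedged sketch: define a libvirt domain from an XML document like the one
    // above and start it. Not minikube's own machine-driver code.
    package main

    import (
        "fmt"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        xml, err := os.ReadFile("ha-300623-m02.xml") // file name assumed
        if err != nil {
            panic(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the config above
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // persist the definition
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boot the defined domain
            panic(err)
        }
        name, _ := dom.GetName()
        fmt.Printf("domain %s started\n", name)
    }

After the domain boots, minikube waits for a DHCP lease on the mk-ha-300623 network, which is the retry loop in the lines that follow.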
	I1026 01:00:24.834328   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:19:9b:85 in network default
	I1026 01:00:24.834898   27934 main.go:141] libmachine: (ha-300623-m02) Ensuring networks are active...
	I1026 01:00:24.834921   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:24.835679   27934 main.go:141] libmachine: (ha-300623-m02) Ensuring network default is active
	I1026 01:00:24.836033   27934 main.go:141] libmachine: (ha-300623-m02) Ensuring network mk-ha-300623 is active
	I1026 01:00:24.836422   27934 main.go:141] libmachine: (ha-300623-m02) Getting domain xml...
	I1026 01:00:24.837184   27934 main.go:141] libmachine: (ha-300623-m02) Creating domain...
	I1026 01:00:26.123801   27934 main.go:141] libmachine: (ha-300623-m02) Waiting to get IP...
	I1026 01:00:26.124786   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:26.125171   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:26.125213   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:26.125161   28306 retry.go:31] will retry after 239.473798ms: waiting for machine to come up
	I1026 01:00:26.366497   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:26.367035   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:26.367063   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:26.366991   28306 retry.go:31] will retry after 247.775109ms: waiting for machine to come up
	I1026 01:00:26.616299   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:26.616749   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:26.616770   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:26.616730   28306 retry.go:31] will retry after 304.793231ms: waiting for machine to come up
	I1026 01:00:26.923149   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:26.923677   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:26.923696   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:26.923618   28306 retry.go:31] will retry after 501.966284ms: waiting for machine to come up
	I1026 01:00:27.427149   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:27.427595   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:27.427620   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:27.427557   28306 retry.go:31] will retry after 462.793286ms: waiting for machine to come up
	I1026 01:00:27.892113   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:27.892649   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:27.892674   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:27.892601   28306 retry.go:31] will retry after 627.280628ms: waiting for machine to come up
	I1026 01:00:28.521634   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:28.522118   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:28.522154   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:28.522059   28306 retry.go:31] will retry after 1.043043357s: waiting for machine to come up
	I1026 01:00:29.566267   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:29.566670   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:29.566697   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:29.566641   28306 retry.go:31] will retry after 925.497125ms: waiting for machine to come up
	I1026 01:00:30.493367   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:30.493801   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:30.493826   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:30.493760   28306 retry.go:31] will retry after 1.604522192s: waiting for machine to come up
	I1026 01:00:32.100432   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:32.100961   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:32.100982   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:32.100919   28306 retry.go:31] will retry after 2.197958234s: waiting for machine to come up
	I1026 01:00:34.301338   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:34.301864   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:34.301891   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:34.301813   28306 retry.go:31] will retry after 1.917554174s: waiting for machine to come up
	I1026 01:00:36.221440   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:36.221869   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:36.221888   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:36.221830   28306 retry.go:31] will retry after 3.272341592s: waiting for machine to come up
	I1026 01:00:39.496057   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:39.496525   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:39.496555   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:39.496473   28306 retry.go:31] will retry after 3.688097346s: waiting for machine to come up
	I1026 01:00:43.186914   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:43.187251   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:43.187284   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:43.187241   28306 retry.go:31] will retry after 5.370855346s: waiting for machine to come up
	I1026 01:00:48.563319   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.563799   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has current primary IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.563826   27934 main.go:141] libmachine: (ha-300623-m02) Found IP for machine: 192.168.39.62
	I1026 01:00:48.563869   27934 main.go:141] libmachine: (ha-300623-m02) Reserving static IP address...
	I1026 01:00:48.564263   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find host DHCP lease matching {name: "ha-300623-m02", mac: "52:54:00:eb:f2:95", ip: "192.168.39.62"} in network mk-ha-300623
	I1026 01:00:48.642625   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Getting to WaitForSSH function...
	I1026 01:00:48.642658   27934 main.go:141] libmachine: (ha-300623-m02) Reserved static IP address: 192.168.39.62
	I1026 01:00:48.642673   27934 main.go:141] libmachine: (ha-300623-m02) Waiting for SSH to be available...
	I1026 01:00:48.645214   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.645726   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:48.645751   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.645908   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Using SSH client type: external
	I1026 01:00:48.645957   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa (-rw-------)
	I1026 01:00:48.645990   27934 main.go:141] libmachine: (ha-300623-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 01:00:48.646004   27934 main.go:141] libmachine: (ha-300623-m02) DBG | About to run SSH command:
	I1026 01:00:48.646022   27934 main.go:141] libmachine: (ha-300623-m02) DBG | exit 0
	I1026 01:00:48.773437   27934 main.go:141] libmachine: (ha-300623-m02) DBG | SSH cmd err, output: <nil>: 
	I1026 01:00:48.773671   27934 main.go:141] libmachine: (ha-300623-m02) KVM machine creation complete!
	I1026 01:00:48.773985   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetConfigRaw
	I1026 01:00:48.774531   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:48.774718   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:48.774839   27934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 01:00:48.774863   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetState
	I1026 01:00:48.776153   27934 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 01:00:48.776168   27934 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 01:00:48.776176   27934 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 01:00:48.776184   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:48.778481   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.778857   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:48.778884   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.778991   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:48.779164   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:48.779300   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:48.779402   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:48.779538   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:48.779788   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:48.779807   27934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 01:00:48.896727   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
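The native SSH `exit 0` probe above is how the new VM is declared reachable. A hedged sketch of the same check using golang.org/x/crypto/ssh, with the host IP, user, and key path taken from the log (the timeout value is an assumption):

    // Hedged sketch: confirm the freshly created VM answers SSH by running
    // `exit 0` with the generated key, as the WaitForSSH step above does.
    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,            // assumed
        }
        client, err := ssh.Dial("tcp", "192.168.39.62:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        if err := session.Run("exit 0"); err != nil {
            panic(err)
        }
        fmt.Println("SSH is available")
    }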
	I1026 01:00:48.896751   27934 main.go:141] libmachine: Detecting the provisioner...
	I1026 01:00:48.896762   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:48.899398   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.899741   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:48.899779   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.899885   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:48.900047   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:48.900184   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:48.900289   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:48.900414   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:48.900617   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:48.900631   27934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 01:00:49.017846   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 01:00:49.017965   27934 main.go:141] libmachine: found compatible host: buildroot
	I1026 01:00:49.017981   27934 main.go:141] libmachine: Provisioning with buildroot...
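Provisioner detection above keys off /etc/os-release (ID=buildroot). A minimal sketch of that check, assuming only the ID field matters for selecting the buildroot provisioner:

    // Hedged sketch: parse /etc/os-release and report whether the host looks
    // like a buildroot guest, as the provisioner-detection step above does.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/os-release")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        id := ""
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            line := scanner.Text()
            if strings.HasPrefix(line, "ID=") {
                id = strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
            }
        }
        if id == "buildroot" {
            fmt.Println("found compatible host: buildroot")
        } else {
            fmt.Printf("unrecognized host OS: %q\n", id)
        }
    }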
	I1026 01:00:49.017993   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetMachineName
	I1026 01:00:49.018219   27934 buildroot.go:166] provisioning hostname "ha-300623-m02"
	I1026 01:00:49.018266   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetMachineName
	I1026 01:00:49.018441   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.021311   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.022133   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.022168   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.022362   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.022542   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.022691   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.022833   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.022971   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:49.023157   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:49.023181   27934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-300623-m02 && echo "ha-300623-m02" | sudo tee /etc/hostname
	I1026 01:00:49.154863   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-300623-m02
	
	I1026 01:00:49.154891   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.157409   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.157924   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.157965   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.158127   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.158313   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.158463   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.158583   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.158721   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:49.158874   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:49.158890   27934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-300623-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-300623-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-300623-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:00:49.281279   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:00:49.281312   27934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:00:49.281349   27934 buildroot.go:174] setting up certificates
	I1026 01:00:49.281361   27934 provision.go:84] configureAuth start
	I1026 01:00:49.281370   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetMachineName
	I1026 01:00:49.281641   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetIP
	I1026 01:00:49.284261   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.284619   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.284660   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.284785   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.286954   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.287298   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.287326   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.287470   27934 provision.go:143] copyHostCerts
	I1026 01:00:49.287501   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:00:49.287544   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:00:49.287555   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:00:49.287640   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:00:49.287745   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:00:49.287775   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:00:49.287788   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:00:49.287835   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:00:49.287908   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:00:49.287934   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:00:49.287941   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:00:49.287990   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:00:49.288059   27934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.ha-300623-m02 san=[127.0.0.1 192.168.39.62 ha-300623-m02 localhost minikube]
	I1026 01:00:49.407467   27934 provision.go:177] copyRemoteCerts
	I1026 01:00:49.407520   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:00:49.407552   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.410082   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.410436   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.410457   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.410696   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.410880   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.411041   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.411166   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
	I1026 01:00:49.495389   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:00:49.495471   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:00:49.520501   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:00:49.520571   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 01:00:49.544170   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:00:49.544266   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 01:00:49.567939   27934 provision.go:87] duration metric: took 286.565797ms to configureAuth
	I1026 01:00:49.567967   27934 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:00:49.568139   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:49.568207   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.570619   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.570975   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.571000   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.571206   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.571396   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.571565   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.571706   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.571875   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:49.572093   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:49.572115   27934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:00:49.802107   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:00:49.802136   27934 main.go:141] libmachine: Checking connection to Docker...
	I1026 01:00:49.802143   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetURL
	I1026 01:00:49.803331   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Using libvirt version 6000000
	I1026 01:00:49.805234   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.805565   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.805594   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.805716   27934 main.go:141] libmachine: Docker is up and running!
	I1026 01:00:49.805729   27934 main.go:141] libmachine: Reticulating splines...
	I1026 01:00:49.805746   27934 client.go:171] duration metric: took 25.458413075s to LocalClient.Create
	I1026 01:00:49.805769   27934 start.go:167] duration metric: took 25.45847781s to libmachine.API.Create "ha-300623"
	I1026 01:00:49.805779   27934 start.go:293] postStartSetup for "ha-300623-m02" (driver="kvm2")
	I1026 01:00:49.805791   27934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:00:49.805808   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:49.806042   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:00:49.806065   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.808068   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.808407   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.808434   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.808582   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.808773   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.808963   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.809100   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
	I1026 01:00:49.895521   27934 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:00:49.899409   27934 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:00:49.899435   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:00:49.899514   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:00:49.899627   27934 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:00:49.899639   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /etc/ssl/certs/176152.pem
	I1026 01:00:49.899762   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:00:49.908849   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:00:49.931119   27934 start.go:296] duration metric: took 125.326962ms for postStartSetup
	I1026 01:00:49.931168   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetConfigRaw
	I1026 01:00:49.931760   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetIP
	I1026 01:00:49.934318   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.934656   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.934677   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.934971   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:00:49.935199   27934 start.go:128] duration metric: took 25.605643958s to createHost
	I1026 01:00:49.935242   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.937348   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.937642   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.937668   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.937766   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.937916   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.938069   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.938232   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.938387   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:49.938577   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:49.938589   27934 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:00:50.054126   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729904450.033939767
	
	I1026 01:00:50.054149   27934 fix.go:216] guest clock: 1729904450.033939767
	I1026 01:00:50.054158   27934 fix.go:229] Guest: 2024-10-26 01:00:50.033939767 +0000 UTC Remote: 2024-10-26 01:00:49.935212743 +0000 UTC m=+68.870574304 (delta=98.727024ms)
	I1026 01:00:50.054179   27934 fix.go:200] guest clock delta is within tolerance: 98.727024ms
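
Note: the guest-clock check above simply compares the guest's "date +%s.%N" output against the host clock and accepts the ~99ms delta. A minimal sketch of the same comparison, assuming SSH access with the machine key shown elsewhere in this log (this is not part of the test output; the tolerance logic itself lives in minikube's fix.go):

    key=/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa
    guest=$(ssh -i "$key" docker@192.168.39.62 'date +%s.%N')
    host=$(date +%s.%N)
    # A positive delta means the guest clock is ahead of the host, as in the log above.
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "guest clock delta: %+.3fs\n", g - h }'
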
	I1026 01:00:50.054185   27934 start.go:83] releasing machines lock for "ha-300623-m02", held for 25.72474455s
	I1026 01:00:50.054206   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:50.054478   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetIP
	I1026 01:00:50.057251   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.057634   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:50.057666   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.060016   27934 out.go:177] * Found network options:
	I1026 01:00:50.061125   27934 out.go:177]   - NO_PROXY=192.168.39.183
	W1026 01:00:50.062183   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:00:50.062255   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:50.062824   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:50.062979   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:50.063068   27934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:00:50.063107   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	W1026 01:00:50.063196   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:00:50.063287   27934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:00:50.063313   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:50.065732   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.065764   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.066105   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:50.066132   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.066157   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:50.066172   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.066255   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:50.066343   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:50.066466   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:50.066529   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:50.066613   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:50.066757   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
	I1026 01:00:50.066776   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:50.066891   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
	I1026 01:00:50.300821   27934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:00:50.306327   27934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:00:50.306383   27934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:00:50.322223   27934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 01:00:50.322250   27934 start.go:495] detecting cgroup driver to use...
	I1026 01:00:50.322315   27934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:00:50.338468   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:00:50.351846   27934 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:00:50.351912   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:00:50.366331   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:00:50.380253   27934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:00:50.506965   27934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:00:50.668001   27934 docker.go:233] disabling docker service ...
	I1026 01:00:50.668069   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:00:50.682592   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:00:50.695962   27934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:00:50.824939   27934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:00:50.938022   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:00:50.952273   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:00:50.970167   27934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:00:50.970223   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:50.980486   27934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:00:50.980547   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:50.991006   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.001215   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.011378   27934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:00:51.021477   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.031248   27934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.047066   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.056669   27934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:00:51.065644   27934 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 01:00:51.065713   27934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 01:00:51.077591   27934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:00:51.086612   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:00:51.190831   27934 ssh_runner.go:195] Run: sudo systemctl restart crio
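
Note: the sequence above pins the pause image to registry.k8s.io/pause:3.10, switches CRI-O's cgroup manager to cgroupfs, opens unprivileged low ports via default_sysctls, loads br_netfilter (the first sysctl probe fails only because the module was not yet loaded), enables IPv4 forwarding, and restarts CRI-O. A hedged way to confirm the result on the node afterwards (not taken from the test itself):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    cat /etc/sysconfig/crio.minikube        # written at 01:00:49 with the --insecure-registry flag
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    systemctl is-active crio
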
	I1026 01:00:51.272466   27934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:00:51.272541   27934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:00:51.277536   27934 start.go:563] Will wait 60s for crictl version
	I1026 01:00:51.277595   27934 ssh_runner.go:195] Run: which crictl
	I1026 01:00:51.281084   27934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:00:51.316243   27934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:00:51.316339   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:00:51.344007   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:00:51.373231   27934 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:00:51.374904   27934 out.go:177]   - env NO_PROXY=192.168.39.183
	I1026 01:00:51.375971   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetIP
	I1026 01:00:51.378647   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:51.378955   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:51.378984   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:51.379181   27934 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:00:51.383229   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
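
Note: the one-liner above updates /etc/hosts idempotently: it drops any existing host.minikube.internal line and re-appends the gateway mapping; the same pattern is reused at 01:00:53 for control-plane.minikube.internal. The generic form of the idiom, as a sketch:

    entry=$'192.168.39.1\thost.minikube.internal'
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
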
	I1026 01:00:51.395396   27934 mustload.go:65] Loading cluster: ha-300623
	I1026 01:00:51.395665   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:51.395979   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:51.396021   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:51.411495   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I1026 01:00:51.412012   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:51.412465   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:51.412492   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:51.412809   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:51.413020   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:51.414616   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:00:51.414900   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:51.414943   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:51.429345   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I1026 01:00:51.429857   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:51.430394   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:51.430414   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:51.430718   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:51.430932   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:51.431063   27934 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623 for IP: 192.168.39.62
	I1026 01:00:51.431072   27934 certs.go:194] generating shared ca certs ...
	I1026 01:00:51.431085   27934 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:51.431231   27934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:00:51.431297   27934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:00:51.431310   27934 certs.go:256] generating profile certs ...
	I1026 01:00:51.431379   27934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key
	I1026 01:00:51.431404   27934 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.7eff9eab
	I1026 01:00:51.431417   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.7eff9eab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.62 192.168.39.254]
	I1026 01:00:51.551653   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.7eff9eab ...
	I1026 01:00:51.551682   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.7eff9eab: {Name:mk7f84df361678f6c264c35c7a54837d967e14ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:51.551843   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.7eff9eab ...
	I1026 01:00:51.551855   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.7eff9eab: {Name:mkd389918e7eb8b1c88d8cee260e577971075312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:51.551931   27934 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.7eff9eab -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt
	I1026 01:00:51.552066   27934 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.7eff9eab -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key
	I1026 01:00:51.552188   27934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key
	I1026 01:00:51.552202   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:00:51.552214   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:00:51.552227   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:00:51.552240   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:00:51.552251   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:00:51.552262   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:00:51.552275   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:00:51.552287   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:00:51.552335   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:00:51.552366   27934 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:00:51.552375   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:00:51.552397   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:00:51.552420   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:00:51.552441   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:00:51.552479   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:00:51.552504   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:51.552517   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem -> /usr/share/ca-certificates/17615.pem
	I1026 01:00:51.552529   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /usr/share/ca-certificates/176152.pem
	I1026 01:00:51.552559   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:51.555385   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:51.555741   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:51.555776   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:51.555946   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:51.556121   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:51.556266   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:51.556384   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:51.633868   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1026 01:00:51.638556   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1026 01:00:51.651311   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1026 01:00:51.655533   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1026 01:00:51.667970   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1026 01:00:51.671912   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1026 01:00:51.681736   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1026 01:00:51.685589   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1026 01:00:51.695314   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1026 01:00:51.699011   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1026 01:00:51.709409   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1026 01:00:51.713200   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1026 01:00:51.722473   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:00:51.745687   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:00:51.767846   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:00:51.789516   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:00:51.811259   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1026 01:00:51.833028   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 01:00:51.856110   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:00:51.879410   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:00:51.905258   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:00:51.929159   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:00:51.951850   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:00:51.976197   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1026 01:00:51.991793   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1026 01:00:52.007237   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1026 01:00:52.023097   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1026 01:00:52.038541   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1026 01:00:52.053670   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1026 01:00:52.068858   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1026 01:00:52.084534   27934 ssh_runner.go:195] Run: openssl version
	I1026 01:00:52.089743   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:00:52.099587   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:52.103529   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:52.103574   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:52.108773   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:00:52.118562   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:00:52.128439   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:00:52.132388   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:00:52.132437   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:00:52.137609   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 01:00:52.147519   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:00:52.157786   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:00:52.162186   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:00:52.162230   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:00:52.167650   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
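
Note: each openssl/ln pair above installs a CA bundle under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL resolves CAs in a hashed certificate directory. The same idiom for a single certificate, sketched here (the hash is whatever openssl prints, not a fixed value):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
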
	I1026 01:00:52.179201   27934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:00:52.183712   27934 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 01:00:52.183765   27934 kubeadm.go:934] updating node {m02 192.168.39.62 8443 v1.31.2 crio true true} ...
	I1026 01:00:52.183873   27934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-300623-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 01:00:52.183908   27934 kube-vip.go:115] generating kube-vip config ...
	I1026 01:00:52.183953   27934 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1026 01:00:52.201496   27934 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1026 01:00:52.201565   27934 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
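
Note: this manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a moment later (the 1441-byte scp at 01:00:53), so the kubelet runs kube-vip as a static pod and the HA virtual IP 192.168.39.254:8443 follows the elected leader. A hedged spot check, not performed by the test (static pods are named <name>-<nodeName>, so the pod name below is an assumption):

    kubectl -n kube-system get pod kube-vip-ha-300623-m02 -o wide
    curl -k https://192.168.39.254:8443/version     # /version is served to anonymous clients by default
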
	I1026 01:00:52.201625   27934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:00:52.212390   27934 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1026 01:00:52.212439   27934 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1026 01:00:52.223416   27934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1026 01:00:52.223436   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1026 01:00:52.223483   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1026 01:00:52.223536   27934 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1026 01:00:52.223555   27934 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1026 01:00:52.227638   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1026 01:00:52.227662   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1026 01:00:53.105621   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1026 01:00:53.105715   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1026 01:00:53.110408   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1026 01:00:53.110445   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1026 01:00:53.233007   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:00:53.274448   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1026 01:00:53.274566   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1026 01:00:53.294441   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1026 01:00:53.294487   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
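
Note: kubectl, kubeadm and kubelet are fetched from dl.k8s.io together with a .sha256 companion file and then copied into /var/lib/minikube/binaries/v1.31.2 on the node. A sketch of the same checksum-verified download for one binary (the verification line reflects the usual use of the .sha256 file and is an assumption, not minikube's code):

    curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
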
	I1026 01:00:53.654866   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1026 01:00:53.664222   27934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1026 01:00:53.679840   27934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:00:53.695653   27934 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1026 01:00:53.711652   27934 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1026 01:00:53.715553   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:00:53.727360   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:00:53.853122   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:00:53.869765   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:00:53.870266   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:53.870326   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:53.886042   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40443
	I1026 01:00:53.886641   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:53.887219   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:53.887243   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:53.887613   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:53.887814   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:53.887974   27934 start.go:317] joinCluster: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:00:53.888094   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1026 01:00:53.888116   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:53.891569   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:53.892007   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:53.892034   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:53.892213   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:53.892359   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:53.892504   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:53.892700   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:54.059992   27934 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:00:54.060032   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l7xlpj.5mal73j6josvpzmx --discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m02 --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443"
	I1026 01:01:15.752497   27934 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l7xlpj.5mal73j6josvpzmx --discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m02 --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443": (21.692442996s)
	I1026 01:01:15.752534   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1026 01:01:16.303360   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-300623-m02 minikube.k8s.io/updated_at=2024_10_26T01_01_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=ha-300623 minikube.k8s.io/primary=false
	I1026 01:01:16.453258   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-300623-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1026 01:01:16.592863   27934 start.go:319] duration metric: took 22.704885851s to joinCluster
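
The join above follows the usual two-step kubeadm flow: the primary control plane prints a join command with a fresh token, and the new node runs that command with control-plane flags before being labeled and untainted. A minimal Go sketch of the same flow, assuming kubeadm is on PATH and run with root privileges (the advertise address and port mirror the logged command; everything else here is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1 (on the primary): print a join command with a non-expiring token,
	// mirroring the logged "kubeadm token create --print-join-command --ttl=0".
	out, err := exec.Command("sudo", "kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.TrimSpace(string(out))

	// Step 2 (on the joining node): the same command plus the control-plane
	// flags seen in the log; executing it is what actually joins the node.
	join += " --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443"
	fmt.Println(join)
}
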
	I1026 01:01:16.592954   27934 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:01:16.593288   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:01:16.594650   27934 out.go:177] * Verifying Kubernetes components...
	I1026 01:01:16.596091   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:01:16.850259   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:01:16.885786   27934 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:01:16.886030   27934 kapi.go:59] client config for ha-300623: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt", KeyFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key", CAFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 01:01:16.886096   27934 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.183:8443
	I1026 01:01:16.886309   27934 node_ready.go:35] waiting up to 6m0s for node "ha-300623-m02" to be "Ready" ...
	I1026 01:01:16.886394   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:16.886406   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:16.886416   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:16.886421   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:16.901951   27934 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1026 01:01:17.386830   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:17.386852   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:17.386859   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:17.386867   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:17.391117   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:17.886726   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:17.886752   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:17.886769   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:17.886774   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:17.891812   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:01:18.386816   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:18.386836   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:18.386844   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:18.386849   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:18.389277   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:18.887322   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:18.887345   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:18.887354   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:18.887359   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:18.890950   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:18.891497   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:19.386717   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:19.386741   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:19.386752   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:19.386757   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:19.389841   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:19.886538   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:19.886562   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:19.886569   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:19.886573   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:19.889883   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:20.386728   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:20.386753   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:20.386764   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:20.386770   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:20.392483   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:01:20.887438   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:20.887464   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:20.887474   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:20.887480   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:20.891169   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:20.891590   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:21.386734   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:21.386758   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:21.386770   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:21.386778   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:21.389970   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:21.886824   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:21.886849   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:21.886859   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:21.886865   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:21.891560   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:22.386652   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:22.386674   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:22.386682   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:22.386686   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:22.391520   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:22.887482   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:22.887508   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:22.887524   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:22.887529   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:22.891155   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:22.891643   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:23.387538   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:23.387567   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:23.387578   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:23.387585   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:23.390499   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:23.886601   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:23.886627   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:23.886637   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:23.886647   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:23.890054   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:24.387524   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:24.387553   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:24.387564   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:24.387570   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:24.390618   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:24.886521   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:24.886550   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:24.886561   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:24.886567   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:24.889985   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:25.386794   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:25.386822   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:25.386831   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:25.386838   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:25.390108   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:25.390691   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:25.887094   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:25.887116   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:25.887124   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:25.887128   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:25.890067   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:26.387517   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:26.387537   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:26.387545   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:26.387550   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:26.391065   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:26.886664   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:26.886688   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:26.886698   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:26.886703   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:26.889958   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:27.386821   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:27.386850   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:27.386860   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:27.386865   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:27.389901   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:27.886863   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:27.886892   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:27.886901   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:27.886904   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:27.890223   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:27.890712   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:28.387256   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:28.387286   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:28.387297   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:28.387304   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:28.391313   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:28.887398   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:28.887423   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:28.887431   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:28.887435   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:28.891415   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:29.387299   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:29.387320   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:29.387328   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:29.387333   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:29.394125   27934 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1026 01:01:29.886896   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:29.886918   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:29.886926   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:29.886928   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:29.890460   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:29.891101   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:30.386473   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:30.386494   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:30.386505   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:30.386512   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:30.389574   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:30.886604   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:30.886631   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:30.886640   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:30.886644   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:30.890190   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:31.386924   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:31.386949   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:31.386959   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:31.386966   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:31.399297   27934 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1026 01:01:31.887213   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:31.887236   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:31.887243   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:31.887250   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:31.890605   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:31.891200   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:32.386487   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:32.386513   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:32.386523   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:32.386530   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:32.389962   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:32.886975   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:32.887003   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:32.887016   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:32.887021   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:32.890088   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.386916   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:33.386938   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.386946   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.386950   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.390776   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.886708   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:33.886731   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.886742   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.886747   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.890420   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.890962   27934 node_ready.go:49] node "ha-300623-m02" has status "Ready":"True"
	I1026 01:01:33.890985   27934 node_ready.go:38] duration metric: took 17.004659759s for node "ha-300623-m02" to be "Ready" ...
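
The readiness wait above is a plain poll of the node object until its Ready condition reports True. A rough client-go approximation of that loop (not minikube's actual node_ready.go; the node name comes from the log, while the kubeconfig path and polling interval are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll the node every 500ms, up to the 6m budget seen in the log.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-300623-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node ha-300623-m02 is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}
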
	I1026 01:01:33.890996   27934 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:01:33.891090   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:33.891103   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.891113   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.891118   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.895593   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:33.901510   27934 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.901584   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ntmgc
	I1026 01:01:33.901593   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.901599   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.901603   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.904838   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.905632   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:33.905646   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.905653   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.905662   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.908670   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.909108   27934 pod_ready.go:93] pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:33.909125   27934 pod_ready.go:82] duration metric: took 7.593244ms for pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.909134   27934 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.909228   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qx24f
	I1026 01:01:33.909236   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.909243   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.909246   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.911623   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.912324   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:33.912342   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.912351   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.912356   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.914836   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.915526   27934 pod_ready.go:93] pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:33.915582   27934 pod_ready.go:82] duration metric: took 6.44095ms for pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.915619   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.915708   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623
	I1026 01:01:33.915720   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.915730   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.915737   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.918774   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.919308   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:33.919323   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.919332   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.919337   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.921541   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.921916   27934 pod_ready.go:93] pod "etcd-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:33.921932   27934 pod_ready.go:82] duration metric: took 6.293574ms for pod "etcd-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.921944   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.921993   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623-m02
	I1026 01:01:33.922003   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.922013   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.922020   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.924042   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.924574   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:33.924592   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.924620   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.924630   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.926627   27934 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:01:33.927009   27934 pod_ready.go:93] pod "etcd-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:33.927026   27934 pod_ready.go:82] duration metric: took 5.07473ms for pod "etcd-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.927043   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.087429   27934 request.go:632] Waited for 160.309698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623
	I1026 01:01:34.087488   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623
	I1026 01:01:34.087496   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.087507   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.087517   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.093218   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:01:34.287260   27934 request.go:632] Waited for 193.380175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:34.287335   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:34.287346   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.287356   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.287367   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.290680   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:34.291257   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:34.291280   27934 pod_ready.go:82] duration metric: took 364.229033ms for pod "kube-apiserver-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.291293   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.487643   27934 request.go:632] Waited for 196.274187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m02
	I1026 01:01:34.487743   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m02
	I1026 01:01:34.487757   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.487769   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.487776   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.490314   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:34.687266   27934 request.go:632] Waited for 196.34951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:34.687319   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:34.687325   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.687332   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.687336   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.690681   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:34.691098   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:34.691116   27934 pod_ready.go:82] duration metric: took 399.816191ms for pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.691125   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.887235   27934 request.go:632] Waited for 196.048043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623
	I1026 01:01:34.887286   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623
	I1026 01:01:34.887292   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.887299   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.887304   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.890298   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:35.087251   27934 request.go:632] Waited for 196.393455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:35.087304   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:35.087311   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.087320   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.087327   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.096042   27934 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1026 01:01:35.096481   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:35.096497   27934 pod_ready.go:82] duration metric: took 405.365113ms for pod "kube-controller-manager-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.096507   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.287575   27934 request.go:632] Waited for 190.95439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m02
	I1026 01:01:35.287635   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m02
	I1026 01:01:35.287641   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.287656   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.287664   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.290956   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:35.486850   27934 request.go:632] Waited for 195.295178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:35.486901   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:35.486907   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.486914   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.486918   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.489791   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:35.490490   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:35.490509   27934 pod_ready.go:82] duration metric: took 393.992807ms for pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.490519   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-65rns" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.687677   27934 request.go:632] Waited for 197.085878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65rns
	I1026 01:01:35.687734   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65rns
	I1026 01:01:35.687739   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.687747   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.687751   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.690861   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:35.886824   27934 request.go:632] Waited for 195.303807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:35.886902   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:35.886908   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.886915   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.886919   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.890003   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:35.890588   27934 pod_ready.go:93] pod "kube-proxy-65rns" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:35.890610   27934 pod_ready.go:82] duration metric: took 400.083533ms for pod "kube-proxy-65rns" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.890620   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7hn2d" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.087724   27934 request.go:632] Waited for 197.035019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hn2d
	I1026 01:01:36.087799   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hn2d
	I1026 01:01:36.087807   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.087817   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.087823   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.090987   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:36.287060   27934 request.go:632] Waited for 195.34906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:36.287112   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:36.287118   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.287126   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.287130   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.290355   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:36.290978   27934 pod_ready.go:93] pod "kube-proxy-7hn2d" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:36.291000   27934 pod_ready.go:82] duration metric: took 400.372479ms for pod "kube-proxy-7hn2d" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.291014   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.486971   27934 request.go:632] Waited for 195.883358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623
	I1026 01:01:36.487050   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623
	I1026 01:01:36.487059   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.487068   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.487073   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.491124   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:36.686937   27934 request.go:632] Waited for 195.292838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:36.686992   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:36.686998   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.687005   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.687009   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.689912   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:36.690462   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:36.690479   27934 pod_ready.go:82] duration metric: took 399.458178ms for pod "kube-scheduler-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.690490   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.887645   27934 request.go:632] Waited for 197.093805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m02
	I1026 01:01:36.887721   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m02
	I1026 01:01:36.887731   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.887742   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.887752   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.892972   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:01:37.086834   27934 request.go:632] Waited for 193.310036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:37.086917   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:37.086924   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.086935   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.086940   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.091462   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:37.091914   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:37.091933   27934 pod_ready.go:82] duration metric: took 401.437262ms for pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:37.091944   27934 pod_ready.go:39] duration metric: took 3.20092896s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:01:37.091963   27934 api_server.go:52] waiting for apiserver process to appear ...
	I1026 01:01:37.092013   27934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:01:37.107184   27934 api_server.go:72] duration metric: took 20.514182215s to wait for apiserver process to appear ...
	I1026 01:01:37.107232   27934 api_server.go:88] waiting for apiserver healthz status ...
	I1026 01:01:37.107251   27934 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1026 01:01:37.112416   27934 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1026 01:01:37.112504   27934 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I1026 01:01:37.112517   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.112528   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.112539   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.113540   27934 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1026 01:01:37.113668   27934 api_server.go:141] control plane version: v1.31.2
	I1026 01:01:37.113698   27934 api_server.go:131] duration metric: took 6.458284ms to wait for apiserver health ...
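
The healthz and version probes above are plain HTTPS GETs against the apiserver. A short sketch of the healthz check (the endpoint matches the log; the client below skips certificate verification only to keep the example self-contained, whereas the real check authenticates with the cluster CA and client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hedged example: trust-all TLS purely for illustration.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.39.183:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as logged above.
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}
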
	I1026 01:01:37.113710   27934 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 01:01:37.287117   27934 request.go:632] Waited for 173.325695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:37.287206   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:37.287218   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.287229   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.287237   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.291660   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:37.296191   27934 system_pods.go:59] 17 kube-system pods found
	I1026 01:01:37.296219   27934 system_pods.go:61] "coredns-7c65d6cfc9-ntmgc" [b2e07a8a-ed53-4151-9cdd-6345d84fea7d] Running
	I1026 01:01:37.296224   27934 system_pods.go:61] "coredns-7c65d6cfc9-qx24f" [d7fc0eb5-4828-436f-a5c8-8de607f590cf] Running
	I1026 01:01:37.296228   27934 system_pods.go:61] "etcd-ha-300623" [7af25c40-90db-43fb-9d1c-02d3b6092d30] Running
	I1026 01:01:37.296232   27934 system_pods.go:61] "etcd-ha-300623-m02" [5e6978a1-41aa-46dd-a1cd-e02042d4ce04] Running
	I1026 01:01:37.296235   27934 system_pods.go:61] "kindnet-4cqmf" [c887471a-629c-4bf1-9296-8ccb5ba56cd6] Running
	I1026 01:01:37.296238   27934 system_pods.go:61] "kindnet-g5bkb" [0ad4551d-8c28-45b3-9563-03d427208f4f] Running
	I1026 01:01:37.296241   27934 system_pods.go:61] "kube-apiserver-ha-300623" [23f40207-db77-4a02-a2dc-eecea5b1874a] Running
	I1026 01:01:37.296244   27934 system_pods.go:61] "kube-apiserver-ha-300623-m02" [6e2d1aeb-ad12-4328-b4da-6b3a2fd19df0] Running
	I1026 01:01:37.296248   27934 system_pods.go:61] "kube-controller-manager-ha-300623" [b9c979d4-64e6-473c-b688-295ddd98c379] Running
	I1026 01:01:37.296251   27934 system_pods.go:61] "kube-controller-manager-ha-300623-m02" [4ae0dbcd-d50c-4a53-9347-bed0a06f1f15] Running
	I1026 01:01:37.296254   27934 system_pods.go:61] "kube-proxy-65rns" [895d0bd9-0f38-442f-99a2-6c5c70bddd39] Running
	I1026 01:01:37.296257   27934 system_pods.go:61] "kube-proxy-7hn2d" [8ffc007b-7e17-4810-9f44-f190a8a7d21b] Running
	I1026 01:01:37.296260   27934 system_pods.go:61] "kube-scheduler-ha-300623" [fcbddffd-40d8-4ebd-bf1e-58b1457af487] Running
	I1026 01:01:37.296263   27934 system_pods.go:61] "kube-scheduler-ha-300623-m02" [81664577-53a3-46fd-98f0-5a517d60fc40] Running
	I1026 01:01:37.296266   27934 system_pods.go:61] "kube-vip-ha-300623" [23c24ab4-cff5-48fa-841b-9567360cbb00] Running
	I1026 01:01:37.296269   27934 system_pods.go:61] "kube-vip-ha-300623-m02" [5e054e06-be47-4fca-bf3d-d0919d31fe23] Running
	I1026 01:01:37.296272   27934 system_pods.go:61] "storage-provisioner" [28d286b1-45b3-4775-a8ff-47dc3cb84792] Running
	I1026 01:01:37.296277   27934 system_pods.go:74] duration metric: took 182.559653ms to wait for pod list to return data ...
	I1026 01:01:37.296287   27934 default_sa.go:34] waiting for default service account to be created ...
	I1026 01:01:37.487718   27934 request.go:632] Waited for 191.356548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:01:37.487771   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:01:37.487776   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.487783   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.487787   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.491586   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:37.491857   27934 default_sa.go:45] found service account: "default"
	I1026 01:01:37.491878   27934 default_sa.go:55] duration metric: took 195.585476ms for default service account to be created ...
	I1026 01:01:37.491887   27934 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 01:01:37.687316   27934 request.go:632] Waited for 195.344627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:37.687371   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:37.687376   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.687383   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.687387   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.691369   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:37.696949   27934 system_pods.go:86] 17 kube-system pods found
	I1026 01:01:37.696973   27934 system_pods.go:89] "coredns-7c65d6cfc9-ntmgc" [b2e07a8a-ed53-4151-9cdd-6345d84fea7d] Running
	I1026 01:01:37.696979   27934 system_pods.go:89] "coredns-7c65d6cfc9-qx24f" [d7fc0eb5-4828-436f-a5c8-8de607f590cf] Running
	I1026 01:01:37.696983   27934 system_pods.go:89] "etcd-ha-300623" [7af25c40-90db-43fb-9d1c-02d3b6092d30] Running
	I1026 01:01:37.696988   27934 system_pods.go:89] "etcd-ha-300623-m02" [5e6978a1-41aa-46dd-a1cd-e02042d4ce04] Running
	I1026 01:01:37.696991   27934 system_pods.go:89] "kindnet-4cqmf" [c887471a-629c-4bf1-9296-8ccb5ba56cd6] Running
	I1026 01:01:37.696995   27934 system_pods.go:89] "kindnet-g5bkb" [0ad4551d-8c28-45b3-9563-03d427208f4f] Running
	I1026 01:01:37.696999   27934 system_pods.go:89] "kube-apiserver-ha-300623" [23f40207-db77-4a02-a2dc-eecea5b1874a] Running
	I1026 01:01:37.697003   27934 system_pods.go:89] "kube-apiserver-ha-300623-m02" [6e2d1aeb-ad12-4328-b4da-6b3a2fd19df0] Running
	I1026 01:01:37.697006   27934 system_pods.go:89] "kube-controller-manager-ha-300623" [b9c979d4-64e6-473c-b688-295ddd98c379] Running
	I1026 01:01:37.697010   27934 system_pods.go:89] "kube-controller-manager-ha-300623-m02" [4ae0dbcd-d50c-4a53-9347-bed0a06f1f15] Running
	I1026 01:01:37.697014   27934 system_pods.go:89] "kube-proxy-65rns" [895d0bd9-0f38-442f-99a2-6c5c70bddd39] Running
	I1026 01:01:37.697018   27934 system_pods.go:89] "kube-proxy-7hn2d" [8ffc007b-7e17-4810-9f44-f190a8a7d21b] Running
	I1026 01:01:37.697021   27934 system_pods.go:89] "kube-scheduler-ha-300623" [fcbddffd-40d8-4ebd-bf1e-58b1457af487] Running
	I1026 01:01:37.697028   27934 system_pods.go:89] "kube-scheduler-ha-300623-m02" [81664577-53a3-46fd-98f0-5a517d60fc40] Running
	I1026 01:01:37.697031   27934 system_pods.go:89] "kube-vip-ha-300623" [23c24ab4-cff5-48fa-841b-9567360cbb00] Running
	I1026 01:01:37.697034   27934 system_pods.go:89] "kube-vip-ha-300623-m02" [5e054e06-be47-4fca-bf3d-d0919d31fe23] Running
	I1026 01:01:37.697036   27934 system_pods.go:89] "storage-provisioner" [28d286b1-45b3-4775-a8ff-47dc3cb84792] Running
	I1026 01:01:37.697042   27934 system_pods.go:126] duration metric: took 205.150542ms to wait for k8s-apps to be running ...
	I1026 01:01:37.697052   27934 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 01:01:37.697091   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:01:37.712370   27934 system_svc.go:56] duration metric: took 15.306195ms WaitForService to wait for kubelet
	I1026 01:01:37.712402   27934 kubeadm.go:582] duration metric: took 21.119406025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 01:01:37.712420   27934 node_conditions.go:102] verifying NodePressure condition ...
	I1026 01:01:37.886735   27934 request.go:632] Waited for 174.248578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I1026 01:01:37.886856   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I1026 01:01:37.886868   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.886878   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.886887   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.890795   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:37.891473   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:01:37.891497   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:01:37.891509   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:01:37.891513   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:01:37.891517   27934 node_conditions.go:105] duration metric: took 179.092926ms to run NodePressure ...
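
The NodePressure step reads each node's capacity figures (the 17734596Ki of ephemeral storage and 2 CPUs reported above). A hedged client-go sketch that lists the nodes and prints those same two values, reusing the assumed default kubeconfig from the earlier sketch:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList keyed by resource name.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
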
	I1026 01:01:37.891528   27934 start.go:241] waiting for startup goroutines ...
	I1026 01:01:37.891553   27934 start.go:255] writing updated cluster config ...
	I1026 01:01:37.893974   27934 out.go:201] 
	I1026 01:01:37.895579   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:01:37.895693   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:01:37.897785   27934 out.go:177] * Starting "ha-300623-m03" control-plane node in "ha-300623" cluster
	I1026 01:01:37.898981   27934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:01:37.899006   27934 cache.go:56] Caching tarball of preloaded images
	I1026 01:01:37.899114   27934 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:01:37.899125   27934 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 01:01:37.899210   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:01:37.900601   27934 start.go:360] acquireMachinesLock for ha-300623-m03: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 01:01:37.900662   27934 start.go:364] duration metric: took 37.924µs to acquireMachinesLock for "ha-300623-m03"
	I1026 01:01:37.900681   27934 start.go:93] Provisioning new machine with config: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:01:37.900777   27934 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1026 01:01:37.902482   27934 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1026 01:01:37.902577   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:01:37.902616   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:01:37.917489   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I1026 01:01:37.918010   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:01:37.918524   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:01:37.918546   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:01:37.918854   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:01:37.919023   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetMachineName
	I1026 01:01:37.919164   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:01:37.919300   27934 start.go:159] libmachine.API.Create for "ha-300623" (driver="kvm2")
	I1026 01:01:37.919332   27934 client.go:168] LocalClient.Create starting
	I1026 01:01:37.919365   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 01:01:37.919401   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 01:01:37.919415   27934 main.go:141] libmachine: Parsing certificate...
	I1026 01:01:37.919461   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 01:01:37.919481   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 01:01:37.919492   27934 main.go:141] libmachine: Parsing certificate...
	I1026 01:01:37.919511   27934 main.go:141] libmachine: Running pre-create checks...
	I1026 01:01:37.919519   27934 main.go:141] libmachine: (ha-300623-m03) Calling .PreCreateCheck
	I1026 01:01:37.919665   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetConfigRaw
	I1026 01:01:37.920059   27934 main.go:141] libmachine: Creating machine...
	I1026 01:01:37.920075   27934 main.go:141] libmachine: (ha-300623-m03) Calling .Create
	I1026 01:01:37.920211   27934 main.go:141] libmachine: (ha-300623-m03) Creating KVM machine...
	I1026 01:01:37.921465   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found existing default KVM network
	I1026 01:01:37.921611   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found existing private KVM network mk-ha-300623
	I1026 01:01:37.921761   27934 main.go:141] libmachine: (ha-300623-m03) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03 ...
	I1026 01:01:37.921786   27934 main.go:141] libmachine: (ha-300623-m03) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 01:01:37.921849   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:37.921742   28699 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:01:37.921948   27934 main.go:141] libmachine: (ha-300623-m03) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 01:01:38.168295   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:38.168154   28699 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa...
	I1026 01:01:38.291085   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:38.290967   28699 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/ha-300623-m03.rawdisk...
	I1026 01:01:38.291115   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Writing magic tar header
	I1026 01:01:38.291125   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Writing SSH key tar header
	I1026 01:01:38.291132   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:38.291098   28699 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03 ...
	I1026 01:01:38.291249   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03
	I1026 01:01:38.291280   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03 (perms=drwx------)
	I1026 01:01:38.291294   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 01:01:38.291307   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:01:38.291313   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 01:01:38.291323   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 01:01:38.291330   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins
	I1026 01:01:38.291340   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home
	I1026 01:01:38.291363   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 01:01:38.291374   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Skipping /home - not owner
	I1026 01:01:38.291387   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 01:01:38.291395   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 01:01:38.291403   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 01:01:38.291411   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 01:01:38.291417   27934 main.go:141] libmachine: (ha-300623-m03) Creating domain...
	I1026 01:01:38.292244   27934 main.go:141] libmachine: (ha-300623-m03) define libvirt domain using xml: 
	I1026 01:01:38.292268   27934 main.go:141] libmachine: (ha-300623-m03) <domain type='kvm'>
	I1026 01:01:38.292276   27934 main.go:141] libmachine: (ha-300623-m03)   <name>ha-300623-m03</name>
	I1026 01:01:38.292281   27934 main.go:141] libmachine: (ha-300623-m03)   <memory unit='MiB'>2200</memory>
	I1026 01:01:38.292286   27934 main.go:141] libmachine: (ha-300623-m03)   <vcpu>2</vcpu>
	I1026 01:01:38.292290   27934 main.go:141] libmachine: (ha-300623-m03)   <features>
	I1026 01:01:38.292296   27934 main.go:141] libmachine: (ha-300623-m03)     <acpi/>
	I1026 01:01:38.292303   27934 main.go:141] libmachine: (ha-300623-m03)     <apic/>
	I1026 01:01:38.292314   27934 main.go:141] libmachine: (ha-300623-m03)     <pae/>
	I1026 01:01:38.292320   27934 main.go:141] libmachine: (ha-300623-m03)     
	I1026 01:01:38.292330   27934 main.go:141] libmachine: (ha-300623-m03)   </features>
	I1026 01:01:38.292336   27934 main.go:141] libmachine: (ha-300623-m03)   <cpu mode='host-passthrough'>
	I1026 01:01:38.292375   27934 main.go:141] libmachine: (ha-300623-m03)   
	I1026 01:01:38.292393   27934 main.go:141] libmachine: (ha-300623-m03)   </cpu>
	I1026 01:01:38.292406   27934 main.go:141] libmachine: (ha-300623-m03)   <os>
	I1026 01:01:38.292421   27934 main.go:141] libmachine: (ha-300623-m03)     <type>hvm</type>
	I1026 01:01:38.292439   27934 main.go:141] libmachine: (ha-300623-m03)     <boot dev='cdrom'/>
	I1026 01:01:38.292484   27934 main.go:141] libmachine: (ha-300623-m03)     <boot dev='hd'/>
	I1026 01:01:38.292496   27934 main.go:141] libmachine: (ha-300623-m03)     <bootmenu enable='no'/>
	I1026 01:01:38.292505   27934 main.go:141] libmachine: (ha-300623-m03)   </os>
	I1026 01:01:38.292533   27934 main.go:141] libmachine: (ha-300623-m03)   <devices>
	I1026 01:01:38.292552   27934 main.go:141] libmachine: (ha-300623-m03)     <disk type='file' device='cdrom'>
	I1026 01:01:38.292569   27934 main.go:141] libmachine: (ha-300623-m03)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/boot2docker.iso'/>
	I1026 01:01:38.292579   27934 main.go:141] libmachine: (ha-300623-m03)       <target dev='hdc' bus='scsi'/>
	I1026 01:01:38.292598   27934 main.go:141] libmachine: (ha-300623-m03)       <readonly/>
	I1026 01:01:38.292607   27934 main.go:141] libmachine: (ha-300623-m03)     </disk>
	I1026 01:01:38.292617   27934 main.go:141] libmachine: (ha-300623-m03)     <disk type='file' device='disk'>
	I1026 01:01:38.292641   27934 main.go:141] libmachine: (ha-300623-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 01:01:38.292657   27934 main.go:141] libmachine: (ha-300623-m03)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/ha-300623-m03.rawdisk'/>
	I1026 01:01:38.292667   27934 main.go:141] libmachine: (ha-300623-m03)       <target dev='hda' bus='virtio'/>
	I1026 01:01:38.292685   27934 main.go:141] libmachine: (ha-300623-m03)     </disk>
	I1026 01:01:38.292699   27934 main.go:141] libmachine: (ha-300623-m03)     <interface type='network'>
	I1026 01:01:38.292713   27934 main.go:141] libmachine: (ha-300623-m03)       <source network='mk-ha-300623'/>
	I1026 01:01:38.292722   27934 main.go:141] libmachine: (ha-300623-m03)       <model type='virtio'/>
	I1026 01:01:38.292731   27934 main.go:141] libmachine: (ha-300623-m03)     </interface>
	I1026 01:01:38.292740   27934 main.go:141] libmachine: (ha-300623-m03)     <interface type='network'>
	I1026 01:01:38.292749   27934 main.go:141] libmachine: (ha-300623-m03)       <source network='default'/>
	I1026 01:01:38.292759   27934 main.go:141] libmachine: (ha-300623-m03)       <model type='virtio'/>
	I1026 01:01:38.292790   27934 main.go:141] libmachine: (ha-300623-m03)     </interface>
	I1026 01:01:38.292812   27934 main.go:141] libmachine: (ha-300623-m03)     <serial type='pty'>
	I1026 01:01:38.292821   27934 main.go:141] libmachine: (ha-300623-m03)       <target port='0'/>
	I1026 01:01:38.292825   27934 main.go:141] libmachine: (ha-300623-m03)     </serial>
	I1026 01:01:38.292832   27934 main.go:141] libmachine: (ha-300623-m03)     <console type='pty'>
	I1026 01:01:38.292837   27934 main.go:141] libmachine: (ha-300623-m03)       <target type='serial' port='0'/>
	I1026 01:01:38.292843   27934 main.go:141] libmachine: (ha-300623-m03)     </console>
	I1026 01:01:38.292851   27934 main.go:141] libmachine: (ha-300623-m03)     <rng model='virtio'>
	I1026 01:01:38.292862   27934 main.go:141] libmachine: (ha-300623-m03)       <backend model='random'>/dev/random</backend>
	I1026 01:01:38.292871   27934 main.go:141] libmachine: (ha-300623-m03)     </rng>
	I1026 01:01:38.292879   27934 main.go:141] libmachine: (ha-300623-m03)     
	I1026 01:01:38.292887   27934 main.go:141] libmachine: (ha-300623-m03)     
	I1026 01:01:38.292907   27934 main.go:141] libmachine: (ha-300623-m03)   </devices>
	I1026 01:01:38.292927   27934 main.go:141] libmachine: (ha-300623-m03) </domain>
	I1026 01:01:38.292944   27934 main.go:141] libmachine: (ha-300623-m03) 
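The block above is the complete libvirt domain XML that libmachine defines for m03: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a SCSI CD-ROM, the raw disk as a virtio device, and two virtio NICs (one on the private mk-ha-300623 network, one on default). Roughly the same result could be had by hand with virsh against the qemu:///system URI from the config, assuming the XML were saved to a local file; ha-300623-m03.xml is a hypothetical name:

        virsh --connect qemu:///system define ha-300623-m03.xml
        virsh --connect qemu:///system start ha-300623-m03
        virsh --connect qemu:///system dumpxml ha-300623-m03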
	I1026 01:01:38.300030   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:59:6f:46 in network default
	I1026 01:01:38.300611   27934 main.go:141] libmachine: (ha-300623-m03) Ensuring networks are active...
	I1026 01:01:38.300639   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:38.301325   27934 main.go:141] libmachine: (ha-300623-m03) Ensuring network default is active
	I1026 01:01:38.301614   27934 main.go:141] libmachine: (ha-300623-m03) Ensuring network mk-ha-300623 is active
	I1026 01:01:38.301965   27934 main.go:141] libmachine: (ha-300623-m03) Getting domain xml...
	I1026 01:01:38.302564   27934 main.go:141] libmachine: (ha-300623-m03) Creating domain...
	I1026 01:01:39.541523   27934 main.go:141] libmachine: (ha-300623-m03) Waiting to get IP...
	I1026 01:01:39.542453   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:39.542916   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:39.542942   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:39.542887   28699 retry.go:31] will retry after 281.419322ms: waiting for machine to come up
	I1026 01:01:39.826321   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:39.826750   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:39.826778   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:39.826737   28699 retry.go:31] will retry after 326.383367ms: waiting for machine to come up
	I1026 01:01:40.155076   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:40.155490   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:40.155515   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:40.155448   28699 retry.go:31] will retry after 321.43703ms: waiting for machine to come up
	I1026 01:01:40.479066   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:40.479512   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:40.479541   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:40.479464   28699 retry.go:31] will retry after 585.906236ms: waiting for machine to come up
	I1026 01:01:41.068220   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:41.068712   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:41.068740   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:41.068671   28699 retry.go:31] will retry after 528.538636ms: waiting for machine to come up
	I1026 01:01:41.598430   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:41.599018   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:41.599040   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:41.598979   28699 retry.go:31] will retry after 646.897359ms: waiting for machine to come up
	I1026 01:01:42.247537   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:42.247952   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:42.247977   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:42.247889   28699 retry.go:31] will retry after 982.424553ms: waiting for machine to come up
	I1026 01:01:43.231997   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:43.232498   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:43.232526   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:43.232426   28699 retry.go:31] will retry after 920.160573ms: waiting for machine to come up
	I1026 01:01:44.154517   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:44.155015   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:44.155041   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:44.154974   28699 retry.go:31] will retry after 1.233732499s: waiting for machine to come up
	I1026 01:01:45.390175   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:45.390649   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:45.390676   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:45.390595   28699 retry.go:31] will retry after 2.305424014s: waiting for machine to come up
	I1026 01:01:47.698485   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:47.698913   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:47.698936   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:47.698861   28699 retry.go:31] will retry after 2.109217289s: waiting for machine to come up
	I1026 01:01:49.810556   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:49.811065   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:49.811095   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:49.811021   28699 retry.go:31] will retry after 3.235213993s: waiting for machine to come up
	I1026 01:01:53.047405   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:53.047859   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:53.047896   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:53.047798   28699 retry.go:31] will retry after 2.928776248s: waiting for machine to come up
	I1026 01:01:55.979004   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:55.979474   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:55.979500   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:55.979422   28699 retry.go:31] will retry after 4.662153221s: waiting for machine to come up
	I1026 01:02:00.643538   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.644004   27934 main.go:141] libmachine: (ha-300623-m03) Found IP for machine: 192.168.39.180
	I1026 01:02:00.644032   27934 main.go:141] libmachine: (ha-300623-m03) Reserving static IP address...
	I1026 01:02:00.644046   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has current primary IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.644407   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find host DHCP lease matching {name: "ha-300623-m03", mac: "52:54:00:c1:38:db", ip: "192.168.39.180"} in network mk-ha-300623
	I1026 01:02:00.720512   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Getting to WaitForSSH function...
	I1026 01:02:00.720543   27934 main.go:141] libmachine: (ha-300623-m03) Reserved static IP address: 192.168.39.180
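The retry loop above is libmachine polling libvirt until the guest obtains a DHCP lease on mk-ha-300623; once the lease for MAC 52:54:00:c1:38:db shows up, the address 192.168.39.180 is recorded and reserved. The same lease table can be inspected directly (a sketch, using the qemu:///system URI from the config):

        virsh --connect qemu:///system net-dhcp-leases mk-ha-300623   # lists the lease for 192.168.39.180
        virsh --connect qemu:///system net-dumpxml mk-ha-300623       # shows the network definition, including any reserved hosts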
	I1026 01:02:00.720555   27934 main.go:141] libmachine: (ha-300623-m03) Waiting for SSH to be available...
	I1026 01:02:00.723096   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.723544   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:00.723574   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.723782   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Using SSH client type: external
	I1026 01:02:00.723802   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa (-rw-------)
	I1026 01:02:00.723832   27934 main.go:141] libmachine: (ha-300623-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 01:02:00.723848   27934 main.go:141] libmachine: (ha-300623-m03) DBG | About to run SSH command:
	I1026 01:02:00.723870   27934 main.go:141] libmachine: (ha-300623-m03) DBG | exit 0
	I1026 01:02:00.849883   27934 main.go:141] libmachine: (ha-300623-m03) DBG | SSH cmd err, output: <nil>: 
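The exit 0 probe above is how libmachine decides the guest's sshd is ready: plain OpenSSH with host-key checking disabled and the per-machine key. The same reachability test can be reproduced by hand using only values already shown in this log:

        ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
            -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa \
            docker@192.168.39.180 'exit 0' && echo reachable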
	I1026 01:02:00.850375   27934 main.go:141] libmachine: (ha-300623-m03) KVM machine creation complete!
	I1026 01:02:00.850699   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetConfigRaw
	I1026 01:02:00.851242   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:00.851412   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:00.851548   27934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 01:02:00.851566   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetState
	I1026 01:02:00.852882   27934 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 01:02:00.852898   27934 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 01:02:00.852910   27934 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 01:02:00.852920   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:00.855365   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.855806   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:00.855828   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.856011   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:00.856209   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:00.856384   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:00.856518   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:00.856737   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:00.856963   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:00.856977   27934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 01:02:00.960586   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:02:00.960610   27934 main.go:141] libmachine: Detecting the provisioner...
	I1026 01:02:00.960620   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:00.963489   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.963835   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:00.963855   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.964027   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:00.964212   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:00.964377   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:00.964520   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:00.964689   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:00.964839   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:00.964850   27934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 01:02:01.070154   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 01:02:01.070243   27934 main.go:141] libmachine: found compatible host: buildroot
	I1026 01:02:01.070253   27934 main.go:141] libmachine: Provisioning with buildroot...
	I1026 01:02:01.070260   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetMachineName
	I1026 01:02:01.070494   27934 buildroot.go:166] provisioning hostname "ha-300623-m03"
	I1026 01:02:01.070509   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetMachineName
	I1026 01:02:01.070670   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.073236   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.073643   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.073674   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.073803   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.074025   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.074141   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.074309   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.074462   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:01.074668   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:01.074685   27934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-300623-m03 && echo "ha-300623-m03" | sudo tee /etc/hostname
	I1026 01:02:01.191755   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-300623-m03
	
	I1026 01:02:01.191785   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.194565   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.194928   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.194957   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.195106   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.195276   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.195444   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.195582   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.195873   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:01.196084   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:01.196105   27934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-300623-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-300623-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-300623-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:02:01.305994   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
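Taken together, the two SSH commands above set the transient hostname, write /etc/hostname, and add or rewrite the 127.0.1.1 entry in /etc/hosts. A quick sanity check on the guest would be:

        hostname                        # expected: ha-300623-m03
        cat /etc/hostname               # expected: ha-300623-m03
        grep ha-300623-m03 /etc/hosts   # expected: 127.0.1.1 ha-300623-m03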
	I1026 01:02:01.306027   27934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:02:01.306044   27934 buildroot.go:174] setting up certificates
	I1026 01:02:01.306053   27934 provision.go:84] configureAuth start
	I1026 01:02:01.306066   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetMachineName
	I1026 01:02:01.306391   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetIP
	I1026 01:02:01.308943   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.309271   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.309299   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.309440   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.311607   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.311976   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.312003   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.312212   27934 provision.go:143] copyHostCerts
	I1026 01:02:01.312245   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:02:01.312277   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:02:01.312286   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:02:01.312350   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:02:01.312423   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:02:01.312441   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:02:01.312445   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:02:01.312471   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:02:01.312516   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:02:01.312533   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:02:01.312540   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:02:01.312560   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:02:01.312651   27934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.ha-300623-m03 san=[127.0.0.1 192.168.39.180 ha-300623-m03 localhost minikube]
	I1026 01:02:01.465531   27934 provision.go:177] copyRemoteCerts
	I1026 01:02:01.465583   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:02:01.465608   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.468185   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.468506   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.468531   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.468753   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.468983   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.469158   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.469293   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:02:01.551550   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:02:01.551614   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:02:01.576554   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:02:01.576635   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 01:02:01.602350   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:02:01.602435   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
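The server certificate generated by provision.go above carries the SANs [127.0.0.1 192.168.39.180 ha-300623-m03 localhost minikube] and is copied, with its key and the CA, into /etc/docker on the guest. One way to confirm the SANs made it into the cert (run on the guest; a sketch only):

        sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A 1 'Subject Alternative Name'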
	I1026 01:02:01.626219   27934 provision.go:87] duration metric: took 320.153705ms to configureAuth
	I1026 01:02:01.626250   27934 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:02:01.626469   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:02:01.626540   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.629202   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.629541   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.629569   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.629826   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.630038   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.630193   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.630349   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.630520   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:01.630681   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:01.630695   27934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:02:01.850626   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
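The SSH command above writes a one-line /etc/sysconfig/crio.minikube that marks the service CIDR 10.96.0.0/12 as an insecure registry range, then restarts CRI-O so the option takes effect. Verifying on the guest (a sketch):

        sudo cat /etc/sysconfig/crio.minikube
        sudo systemctl is-active crio   # expected: active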
	
	I1026 01:02:01.850656   27934 main.go:141] libmachine: Checking connection to Docker...
	I1026 01:02:01.850666   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetURL
	I1026 01:02:01.851985   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Using libvirt version 6000000
	I1026 01:02:01.853953   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.854248   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.854277   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.854395   27934 main.go:141] libmachine: Docker is up and running!
	I1026 01:02:01.854410   27934 main.go:141] libmachine: Reticulating splines...
	I1026 01:02:01.854416   27934 client.go:171] duration metric: took 23.935075321s to LocalClient.Create
	I1026 01:02:01.854435   27934 start.go:167] duration metric: took 23.935138215s to libmachine.API.Create "ha-300623"
	I1026 01:02:01.854442   27934 start.go:293] postStartSetup for "ha-300623-m03" (driver="kvm2")
	I1026 01:02:01.854455   27934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:02:01.854473   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:01.854694   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:02:01.854714   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.856743   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.857033   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.857061   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.857181   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.857358   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.857509   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.857636   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:02:01.939727   27934 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:02:01.943512   27934 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:02:01.943536   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:02:01.943602   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:02:01.943673   27934 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:02:01.943683   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /etc/ssl/certs/176152.pem
	I1026 01:02:01.943769   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:02:01.952556   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:02:01.974588   27934 start.go:296] duration metric: took 120.131633ms for postStartSetup
	I1026 01:02:01.974635   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetConfigRaw
	I1026 01:02:01.975249   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetIP
	I1026 01:02:01.977630   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.977939   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.977966   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.978201   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:02:01.978439   27934 start.go:128] duration metric: took 24.077650452s to createHost
	I1026 01:02:01.978471   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.981153   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.981663   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.981690   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.981836   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.981994   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.982159   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.982318   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.982480   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:01.982694   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:01.982711   27934 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:02:02.085984   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729904522.063699456
	
	I1026 01:02:02.086012   27934 fix.go:216] guest clock: 1729904522.063699456
	I1026 01:02:02.086022   27934 fix.go:229] Guest: 2024-10-26 01:02:02.063699456 +0000 UTC Remote: 2024-10-26 01:02:01.978456379 +0000 UTC m=+140.913817945 (delta=85.243077ms)
	I1026 01:02:02.086043   27934 fix.go:200] guest clock delta is within tolerance: 85.243077ms
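The clock check above runs date +%s.%N inside the guest and compares the result against the host's own timestamp; here the 85.243077ms delta is within minikube's tolerance, so no clock adjustment is made. Reproducing the comparison by hand (a sketch, reusing the SSH invocation shown earlier in this log):

        date +%s.%N
        ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
            -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa \
            docker@192.168.39.180 'date +%s.%N'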
	I1026 01:02:02.086049   27934 start.go:83] releasing machines lock for "ha-300623-m03", held for 24.185376811s
	I1026 01:02:02.086067   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:02.086287   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetIP
	I1026 01:02:02.088913   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.089268   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:02.089295   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.091504   27934 out.go:177] * Found network options:
	I1026 01:02:02.092955   27934 out.go:177]   - NO_PROXY=192.168.39.183,192.168.39.62
	W1026 01:02:02.094206   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	W1026 01:02:02.094236   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:02:02.094251   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:02.094803   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:02.094989   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:02.095095   27934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:02:02.095133   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	W1026 01:02:02.095154   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	W1026 01:02:02.095180   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:02:02.095247   27934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:02:02.095268   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:02.097751   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.098028   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.098086   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:02.098111   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.098235   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:02.098391   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:02.098497   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:02.098514   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.098524   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:02.098666   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:02:02.098717   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:02.098843   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:02.098984   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:02.099112   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:02:02.334862   27934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:02:02.340486   27934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:02:02.340547   27934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:02:02.357805   27934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 01:02:02.357834   27934 start.go:495] detecting cgroup driver to use...
	I1026 01:02:02.357898   27934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:02:02.374996   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:02:02.392000   27934 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:02:02.392086   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:02:02.407807   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:02:02.423965   27934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:02:02.552274   27934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:02:02.700711   27934 docker.go:233] disabling docker service ...
	I1026 01:02:02.700771   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:02:02.718236   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:02:02.732116   27934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:02:02.868905   27934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:02:02.980683   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:02:02.994225   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:02:03.012791   27934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:02:03.012857   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.023082   27934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:02:03.023153   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.033232   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.045462   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.056259   27934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:02:03.067151   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.077520   27934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.096669   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.106891   27934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:02:03.116392   27934 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 01:02:03.116458   27934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 01:02:03.129779   27934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:02:03.139745   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:02:03.248476   27934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 01:02:03.335933   27934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:02:03.336001   27934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:02:03.341028   27934 start.go:563] Will wait 60s for crictl version
	I1026 01:02:03.341087   27934 ssh_runner.go:195] Run: which crictl
	I1026 01:02:03.344865   27934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:02:03.384107   27934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:02:03.384182   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:02:03.413095   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:02:03.443714   27934 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:02:03.445737   27934 out.go:177]   - env NO_PROXY=192.168.39.183
	I1026 01:02:03.447586   27934 out.go:177]   - env NO_PROXY=192.168.39.183,192.168.39.62
	I1026 01:02:03.449031   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetIP
	I1026 01:02:03.452447   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:03.452878   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:03.452917   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:03.453179   27934 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:02:03.457652   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:02:03.471067   27934 mustload.go:65] Loading cluster: ha-300623
	I1026 01:02:03.471351   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:02:03.471669   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:02:03.471714   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:02:03.487194   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
	I1026 01:02:03.487657   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:02:03.488105   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:02:03.488127   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:02:03.488437   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:02:03.488638   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:02:03.490095   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:02:03.490500   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:02:03.490536   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:02:03.506020   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I1026 01:02:03.506418   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:02:03.506947   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:02:03.506976   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:02:03.507350   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:02:03.507527   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:02:03.507727   27934 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623 for IP: 192.168.39.180
	I1026 01:02:03.507740   27934 certs.go:194] generating shared ca certs ...
	I1026 01:02:03.507758   27934 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:02:03.507883   27934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:02:03.507924   27934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:02:03.507933   27934 certs.go:256] generating profile certs ...
	I1026 01:02:03.508003   27934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key
	I1026 01:02:03.508028   27934 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.71a5adc0
	I1026 01:02:03.508039   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.71a5adc0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.62 192.168.39.180 192.168.39.254]
	I1026 01:02:03.728822   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.71a5adc0 ...
	I1026 01:02:03.728854   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.71a5adc0: {Name:mk13b323a89a31df62edb3f93e2caa9ef5c95608 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:02:03.729026   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.71a5adc0 ...
	I1026 01:02:03.729038   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.71a5adc0: {Name:mk931eb52f244ae5eac81e077cce00cf1844fe8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:02:03.729110   27934 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.71a5adc0 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt
	I1026 01:02:03.729242   27934 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.71a5adc0 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key
	I1026 01:02:03.729367   27934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key
	I1026 01:02:03.729382   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:02:03.729396   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:02:03.729409   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:02:03.729443   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:02:03.729457   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:02:03.729475   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:02:03.729491   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:02:03.749554   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:02:03.749647   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:02:03.749686   27934 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:02:03.749696   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:02:03.749718   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:02:03.749740   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:02:03.749762   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:02:03.749801   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:02:03.749827   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:02:03.749842   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem -> /usr/share/ca-certificates/17615.pem
	I1026 01:02:03.749854   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /usr/share/ca-certificates/176152.pem
	I1026 01:02:03.749890   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:02:03.752989   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:02:03.753341   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:02:03.753364   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:02:03.753579   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:02:03.753776   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:02:03.753920   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:02:03.754076   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:02:03.829849   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1026 01:02:03.834830   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1026 01:02:03.846065   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1026 01:02:03.849963   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1026 01:02:03.859787   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1026 01:02:03.863509   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1026 01:02:03.873244   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1026 01:02:03.876871   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1026 01:02:03.892364   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1026 01:02:03.896520   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1026 01:02:03.907397   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1026 01:02:03.911631   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1026 01:02:03.924039   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:02:03.948397   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:02:03.971545   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:02:03.994742   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:02:04.019083   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1026 01:02:04.043193   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 01:02:04.066431   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:02:04.089556   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:02:04.112422   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:02:04.137648   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:02:04.163111   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:02:04.187974   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1026 01:02:04.204419   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1026 01:02:04.221407   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1026 01:02:04.240446   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1026 01:02:04.258125   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1026 01:02:04.274506   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1026 01:02:04.290927   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1026 01:02:04.307309   27934 ssh_runner.go:195] Run: openssl version
	I1026 01:02:04.312975   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:02:04.323808   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:02:04.328222   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:02:04.328286   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:02:04.334015   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:02:04.344665   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:02:04.355274   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:02:04.359793   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:02:04.359862   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:02:04.365345   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:02:04.376251   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:02:04.387304   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:02:04.391720   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:02:04.391792   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:02:04.397948   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 01:02:04.409356   27934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:02:04.413518   27934 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 01:02:04.413569   27934 kubeadm.go:934] updating node {m03 192.168.39.180 8443 v1.31.2 crio true true} ...
	I1026 01:02:04.413666   27934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-300623-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 01:02:04.413689   27934 kube-vip.go:115] generating kube-vip config ...
	I1026 01:02:04.413726   27934 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1026 01:02:04.429892   27934 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1026 01:02:04.429970   27934 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1026 01:02:04.430030   27934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:02:04.439803   27934 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1026 01:02:04.439857   27934 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1026 01:02:04.448835   27934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1026 01:02:04.448847   27934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1026 01:02:04.448867   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1026 01:02:04.448890   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:02:04.448924   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1026 01:02:04.448835   27934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1026 01:02:04.448969   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1026 01:02:04.449022   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1026 01:02:04.453004   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1026 01:02:04.453036   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1026 01:02:04.477386   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1026 01:02:04.477445   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1026 01:02:04.477465   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1026 01:02:04.477513   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1026 01:02:04.523830   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1026 01:02:04.523877   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1026 01:02:05.306345   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1026 01:02:05.316372   27934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1026 01:02:05.333527   27934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:02:05.350382   27934 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1026 01:02:05.366102   27934 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1026 01:02:05.369984   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:02:05.381182   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:02:05.496759   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:02:05.512263   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:02:05.512689   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:02:05.512740   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:02:05.531279   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40195
	I1026 01:02:05.531819   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:02:05.532966   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:02:05.532989   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:02:05.533339   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:02:05.533529   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:02:05.533682   27934 start.go:317] joinCluster: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:02:05.533839   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1026 01:02:05.533866   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:02:05.536583   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:02:05.537028   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:02:05.537057   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:02:05.537282   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:02:05.537491   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:02:05.537676   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:02:05.537795   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:02:05.697156   27934 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:02:05.697206   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v8d8ct.yqbxucpp9erkd2fb --discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m03 --control-plane --apiserver-advertise-address=192.168.39.180 --apiserver-bind-port=8443"
	I1026 01:02:29.292626   27934 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v8d8ct.yqbxucpp9erkd2fb --discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m03 --control-plane --apiserver-advertise-address=192.168.39.180 --apiserver-bind-port=8443": (23.595390034s)
	I1026 01:02:29.292667   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1026 01:02:29.885895   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-300623-m03 minikube.k8s.io/updated_at=2024_10_26T01_02_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=ha-300623 minikube.k8s.io/primary=false
	I1026 01:02:29.997019   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-300623-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1026 01:02:30.136451   27934 start.go:319] duration metric: took 24.602766496s to joinCluster
	I1026 01:02:30.136544   27934 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:02:30.137000   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:02:30.137905   27934 out.go:177] * Verifying Kubernetes components...
	I1026 01:02:30.139044   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:02:30.389764   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:02:30.425326   27934 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:02:30.425691   27934 kapi.go:59] client config for ha-300623: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt", KeyFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key", CAFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 01:02:30.425759   27934 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.183:8443
	I1026 01:02:30.426058   27934 node_ready.go:35] waiting up to 6m0s for node "ha-300623-m03" to be "Ready" ...
	I1026 01:02:30.426159   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:30.426170   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:30.426180   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:30.426189   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:30.431156   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:02:30.926776   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:30.926801   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:30.926811   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:30.926819   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:30.930142   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:31.426736   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:31.426771   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:31.426783   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:31.426791   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:31.430233   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:31.926707   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:31.926732   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:31.926744   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:31.926753   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:31.929704   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:32.426493   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:32.426514   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:32.426522   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:32.426527   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:32.429836   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:32.430379   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:32.926337   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:32.926363   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:32.926376   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:32.926383   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:32.929516   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:33.426312   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:33.426334   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:33.426342   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:33.426364   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:33.430395   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:02:33.927020   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:33.927043   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:33.927050   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:33.927053   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:33.930539   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:34.426611   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:34.426637   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:34.426649   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:34.426653   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:34.429762   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:34.926585   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:34.926607   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:34.926616   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:34.926622   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:34.929963   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:34.930447   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:35.426739   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:35.426760   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:35.426786   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:35.426791   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:35.429676   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:35.926699   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:35.926723   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:35.926731   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:35.926735   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:35.930444   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:36.427025   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:36.427052   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:36.427063   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:36.427069   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:36.430961   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:36.926688   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:36.926715   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:36.926726   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:36.926732   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:36.930504   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:36.931114   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:37.426533   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:37.426568   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:37.426581   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:37.426588   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:37.434793   27934 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1026 01:02:37.926670   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:37.926699   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:37.926711   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:37.926717   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:37.929364   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:38.427306   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:38.427327   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:38.427335   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:38.427339   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:38.434499   27934 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1026 01:02:38.926882   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:38.926902   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:38.926911   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:38.926914   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:38.930831   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:38.931460   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:39.427252   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:39.427274   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:39.427283   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:39.427286   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:39.430650   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:39.926620   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:39.926643   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:39.926654   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:39.926661   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:39.930077   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:40.426363   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:40.426396   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:40.426408   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:40.426414   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:40.429976   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:40.926280   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:40.926310   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:40.926320   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:40.926325   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:40.929942   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:41.426533   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:41.426556   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:41.426563   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:41.426568   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:41.430315   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:41.431209   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:41.926498   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:41.926522   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:41.926529   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:41.926534   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:41.929738   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:42.426973   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:42.427006   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:42.427013   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:42.427019   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:42.430244   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:42.927247   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:42.927275   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:42.927283   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:42.927288   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:42.930906   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:43.426731   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:43.426759   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:43.426768   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:43.426773   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:43.430712   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:43.431301   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:43.926784   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:43.926823   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:43.926832   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:43.926835   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:43.929957   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:44.427237   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:44.427258   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:44.427266   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:44.427270   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:44.430769   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:44.926707   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:44.926731   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:44.926740   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:44.926743   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:44.930247   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:45.427043   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:45.427065   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:45.427074   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:45.427079   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:45.430820   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:45.431387   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:45.927275   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:45.927296   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:45.927304   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:45.927306   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:45.930627   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:46.426245   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:46.426266   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:46.426274   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:46.426278   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:46.429561   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:46.926352   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:46.926373   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:46.926384   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:46.926390   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:46.929454   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:47.426420   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:47.426462   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:47.426472   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:47.426477   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:47.430019   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:47.926864   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:47.926889   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:47.926900   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:47.926906   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:47.929997   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:47.930569   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:48.426656   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:48.426693   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.426709   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.426716   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.435417   27934 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1026 01:02:48.436037   27934 node_ready.go:49] node "ha-300623-m03" has status "Ready":"True"
	I1026 01:02:48.436062   27934 node_ready.go:38] duration metric: took 18.009981713s for node "ha-300623-m03" to be "Ready" ...
	I1026 01:02:48.436077   27934 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:02:48.436165   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:48.436180   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.436190   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.436203   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.442639   27934 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1026 01:02:48.450258   27934 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.450343   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ntmgc
	I1026 01:02:48.450349   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.450356   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.450360   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.454261   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:48.454872   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:48.454888   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.454895   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.454900   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.459379   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:02:48.460137   27934 pod_ready.go:93] pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.460155   27934 pod_ready.go:82] duration metric: took 9.869467ms for pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.460165   27934 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.460215   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qx24f
	I1026 01:02:48.460224   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.460231   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.460233   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.463232   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.463771   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:48.463783   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.463792   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.463797   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.466281   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.466732   27934 pod_ready.go:93] pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.466748   27934 pod_ready.go:82] duration metric: took 6.577285ms for pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.466762   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.466818   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623
	I1026 01:02:48.466826   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.466833   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.466837   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.469268   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.469931   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:48.469946   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.469953   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.469957   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.472212   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.472664   27934 pod_ready.go:93] pod "etcd-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.472682   27934 pod_ready.go:82] duration metric: took 5.914156ms for pod "etcd-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.472691   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.472750   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623-m02
	I1026 01:02:48.472759   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.472766   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.472770   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.475167   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.475777   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:48.475794   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.475802   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.475806   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.478259   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.478687   27934 pod_ready.go:93] pod "etcd-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.478703   27934 pod_ready.go:82] duration metric: took 6.006167ms for pod "etcd-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.478711   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.627599   27934 request.go:632] Waited for 148.830245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623-m03
	I1026 01:02:48.627657   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623-m03
	I1026 01:02:48.627667   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.627674   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.627680   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.631663   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:48.827561   27934 request.go:632] Waited for 195.345637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:48.827630   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:48.827637   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.827645   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.827649   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.831042   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:48.831791   27934 pod_ready.go:93] pod "etcd-ha-300623-m03" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.831815   27934 pod_ready.go:82] duration metric: took 353.094836ms for pod "etcd-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.831835   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.027283   27934 request.go:632] Waited for 195.388128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623
	I1026 01:02:49.027360   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623
	I1026 01:02:49.027365   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.027373   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.027380   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.030439   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:49.227538   27934 request.go:632] Waited for 196.377694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:49.227614   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:49.227627   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.227643   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.227650   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.230823   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:49.231339   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:49.231360   27934 pod_ready.go:82] duration metric: took 399.517961ms for pod "kube-apiserver-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.231374   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.426746   27934 request.go:632] Waited for 195.299777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m02
	I1026 01:02:49.426820   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m02
	I1026 01:02:49.426826   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.426833   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.426842   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.430033   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:49.626896   27934 request.go:632] Waited for 196.298512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:49.626964   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:49.626970   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.626977   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.626980   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.630142   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:49.630626   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:49.630645   27934 pod_ready.go:82] duration metric: took 399.259883ms for pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.630655   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.826666   27934 request.go:632] Waited for 195.934282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m03
	I1026 01:02:49.826722   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m03
	I1026 01:02:49.826727   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.826739   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.826744   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.830021   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.027111   27934 request.go:632] Waited for 196.361005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:50.027198   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:50.027210   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.027222   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.027231   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.030533   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.031215   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623-m03" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:50.031238   27934 pod_ready.go:82] duration metric: took 400.574994ms for pod "kube-apiserver-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.031268   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.227253   27934 request.go:632] Waited for 195.903041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623
	I1026 01:02:50.227309   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623
	I1026 01:02:50.227314   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.227321   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.227325   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.230415   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.427535   27934 request.go:632] Waited for 196.340381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:50.427594   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:50.427602   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.427612   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.427619   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.430823   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.431395   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:50.431413   27934 pod_ready.go:82] duration metric: took 400.135776ms for pod "kube-controller-manager-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.431426   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.626990   27934 request.go:632] Waited for 195.470744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m02
	I1026 01:02:50.627069   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m02
	I1026 01:02:50.627075   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.627082   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.627087   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.630185   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.827370   27934 request.go:632] Waited for 196.34647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:50.827442   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:50.827448   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.827455   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.827461   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.831085   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.831842   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:50.831859   27934 pod_ready.go:82] duration metric: took 400.426225ms for pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.831869   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.027015   27934 request.go:632] Waited for 195.078027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m03
	I1026 01:02:51.027084   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m03
	I1026 01:02:51.027092   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.027099   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.027103   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.031047   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:51.227422   27934 request.go:632] Waited for 195.619523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:51.227479   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:51.227484   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.227492   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.227495   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.231982   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:02:51.232544   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623-m03" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:51.232570   27934 pod_ready.go:82] duration metric: took 400.691296ms for pod "kube-controller-manager-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.232584   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-65rns" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.427652   27934 request.go:632] Waited for 194.988908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65rns
	I1026 01:02:51.427748   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65rns
	I1026 01:02:51.427756   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.427763   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.427769   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.431107   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:51.627383   27934 request.go:632] Waited for 195.646071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:51.627443   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:51.627450   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.627459   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.627465   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.630345   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:51.630913   27934 pod_ready.go:93] pod "kube-proxy-65rns" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:51.630940   27934 pod_ready.go:82] duration metric: took 398.33791ms for pod "kube-proxy-65rns" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.630957   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7hn2d" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.826903   27934 request.go:632] Waited for 195.872288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hn2d
	I1026 01:02:51.826976   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hn2d
	I1026 01:02:51.826981   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.826989   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.826995   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.830596   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.027634   27934 request.go:632] Waited for 196.404478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:52.027720   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:52.027729   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.027740   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.027744   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.031724   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.032488   27934 pod_ready.go:93] pod "kube-proxy-7hn2d" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:52.032512   27934 pod_ready.go:82] duration metric: took 401.542551ms for pod "kube-proxy-7hn2d" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.032525   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mv7sf" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.227636   27934 request.go:632] Waited for 195.035156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mv7sf
	I1026 01:02:52.227691   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mv7sf
	I1026 01:02:52.227697   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.227705   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.227713   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.230866   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.426675   27934 request.go:632] Waited for 195.29136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:52.426757   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:52.426765   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.426775   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.426782   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.429979   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.430570   27934 pod_ready.go:93] pod "kube-proxy-mv7sf" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:52.430594   27934 pod_ready.go:82] duration metric: took 398.058369ms for pod "kube-proxy-mv7sf" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.430608   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.627616   27934 request.go:632] Waited for 196.938648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623
	I1026 01:02:52.627691   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623
	I1026 01:02:52.627697   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.627704   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.627709   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.631135   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.827333   27934 request.go:632] Waited for 195.390365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:52.827388   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:52.827397   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.827404   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.827409   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.830746   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.831581   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:52.831599   27934 pod_ready.go:82] duration metric: took 400.983275ms for pod "kube-scheduler-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.831611   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:53.026899   27934 request.go:632] Waited for 195.225563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m02
	I1026 01:02:53.026954   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m02
	I1026 01:02:53.026959   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.026967   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.026971   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.030270   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:53.227500   27934 request.go:632] Waited for 196.386112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:53.227559   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:53.227564   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.227572   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.227577   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.231336   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:53.231867   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:53.231885   27934 pod_ready.go:82] duration metric: took 400.266151ms for pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:53.231896   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:53.426974   27934 request.go:632] Waited for 194.996598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m03
	I1026 01:02:53.427025   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m03
	I1026 01:02:53.427030   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.427037   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.427041   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.430377   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:53.626766   27934 request.go:632] Waited for 195.735993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:53.626824   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:53.626829   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.626836   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.626840   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.630167   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:53.630954   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623-m03" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:53.630975   27934 pod_ready.go:82] duration metric: took 399.071645ms for pod "kube-scheduler-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:53.630992   27934 pod_ready.go:39] duration metric: took 5.19490109s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:02:53.631015   27934 api_server.go:52] waiting for apiserver process to appear ...
	I1026 01:02:53.631076   27934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:02:53.646977   27934 api_server.go:72] duration metric: took 23.510394339s to wait for apiserver process to appear ...
	I1026 01:02:53.647007   27934 api_server.go:88] waiting for apiserver healthz status ...
	I1026 01:02:53.647030   27934 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1026 01:02:53.651895   27934 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1026 01:02:53.651966   27934 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I1026 01:02:53.651972   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.651979   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.651983   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.652674   27934 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1026 01:02:53.652802   27934 api_server.go:141] control plane version: v1.31.2
	I1026 01:02:53.652821   27934 api_server.go:131] duration metric: took 5.805941ms to wait for apiserver health ...
	I1026 01:02:53.652830   27934 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 01:02:53.827168   27934 request.go:632] Waited for 174.273301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:53.827222   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:53.827228   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.827235   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.827240   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.834306   27934 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1026 01:02:53.841838   27934 system_pods.go:59] 24 kube-system pods found
	I1026 01:02:53.841872   27934 system_pods.go:61] "coredns-7c65d6cfc9-ntmgc" [b2e07a8a-ed53-4151-9cdd-6345d84fea7d] Running
	I1026 01:02:53.841879   27934 system_pods.go:61] "coredns-7c65d6cfc9-qx24f" [d7fc0eb5-4828-436f-a5c8-8de607f590cf] Running
	I1026 01:02:53.841885   27934 system_pods.go:61] "etcd-ha-300623" [7af25c40-90db-43fb-9d1c-02d3b6092d30] Running
	I1026 01:02:53.841891   27934 system_pods.go:61] "etcd-ha-300623-m02" [5e6978a1-41aa-46dd-a1cd-e02042d4ce04] Running
	I1026 01:02:53.841897   27934 system_pods.go:61] "etcd-ha-300623-m03" [018c3dbe-0bf5-489e-804a-fb1e3195eded] Running
	I1026 01:02:53.841901   27934 system_pods.go:61] "kindnet-2v827" [0a2f3ac1-e6ff-4f8a-83bd-0b8c82e2070b] Running
	I1026 01:02:53.841906   27934 system_pods.go:61] "kindnet-4cqmf" [c887471a-629c-4bf1-9296-8ccb5ba56cd6] Running
	I1026 01:02:53.841911   27934 system_pods.go:61] "kindnet-g5bkb" [0ad4551d-8c28-45b3-9563-03d427208f4f] Running
	I1026 01:02:53.841916   27934 system_pods.go:61] "kube-apiserver-ha-300623" [23f40207-db77-4a02-a2dc-eecea5b1874a] Running
	I1026 01:02:53.841921   27934 system_pods.go:61] "kube-apiserver-ha-300623-m02" [6e2d1aeb-ad12-4328-b4da-6b3a2fd19df0] Running
	I1026 01:02:53.841927   27934 system_pods.go:61] "kube-apiserver-ha-300623-m03" [4f6f2be0-c13c-48d1-b645-719d861bfc9d] Running
	I1026 01:02:53.841932   27934 system_pods.go:61] "kube-controller-manager-ha-300623" [b9c979d4-64e6-473c-b688-295ddd98c379] Running
	I1026 01:02:53.841938   27934 system_pods.go:61] "kube-controller-manager-ha-300623-m02" [4ae0dbcd-d50c-4a53-9347-bed0a06f1f15] Running
	I1026 01:02:53.841945   27934 system_pods.go:61] "kube-controller-manager-ha-300623-m03" [43a89828-44bd-4c39-8656-ce212592e684] Running
	I1026 01:02:53.841951   27934 system_pods.go:61] "kube-proxy-65rns" [895d0bd9-0f38-442f-99a2-6c5c70bddd39] Running
	I1026 01:02:53.841959   27934 system_pods.go:61] "kube-proxy-7hn2d" [8ffc007b-7e17-4810-9f44-f190a8a7d21b] Running
	I1026 01:02:53.841964   27934 system_pods.go:61] "kube-proxy-mv7sf" [687c9b8d-6dc7-46b4-b5c6-dce15b93fe5c] Running
	I1026 01:02:53.841970   27934 system_pods.go:61] "kube-scheduler-ha-300623" [fcbddffd-40d8-4ebd-bf1e-58b1457af487] Running
	I1026 01:02:53.841976   27934 system_pods.go:61] "kube-scheduler-ha-300623-m02" [81664577-53a3-46fd-98f0-5a517d60fc40] Running
	I1026 01:02:53.841982   27934 system_pods.go:61] "kube-scheduler-ha-300623-m03" [4e0f23a0-d27b-4a4f-88cb-9f9fd09cc873] Running
	I1026 01:02:53.841992   27934 system_pods.go:61] "kube-vip-ha-300623" [23c24ab4-cff5-48fa-841b-9567360cbb00] Running
	I1026 01:02:53.841998   27934 system_pods.go:61] "kube-vip-ha-300623-m02" [5e054e06-be47-4fca-bf3d-d0919d31fe23] Running
	I1026 01:02:53.842006   27934 system_pods.go:61] "kube-vip-ha-300623-m03" [e650a523-9ff0-41d2-9446-c84aa4f0b88c] Running
	I1026 01:02:53.842011   27934 system_pods.go:61] "storage-provisioner" [28d286b1-45b3-4775-a8ff-47dc3cb84792] Running
	I1026 01:02:53.842020   27934 system_pods.go:74] duration metric: took 189.182306ms to wait for pod list to return data ...
	I1026 01:02:53.842033   27934 default_sa.go:34] waiting for default service account to be created ...
	I1026 01:02:54.027353   27934 request.go:632] Waited for 185.245125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:02:54.027412   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:02:54.027420   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:54.027431   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:54.027441   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:54.030973   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:54.031077   27934 default_sa.go:45] found service account: "default"
	I1026 01:02:54.031089   27934 default_sa.go:55] duration metric: took 189.048618ms for default service account to be created ...
	I1026 01:02:54.031098   27934 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 01:02:54.227423   27934 request.go:632] Waited for 196.255704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:54.227482   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:54.227493   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:54.227507   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:54.227517   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:54.232907   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:02:54.240539   27934 system_pods.go:86] 24 kube-system pods found
	I1026 01:02:54.240565   27934 system_pods.go:89] "coredns-7c65d6cfc9-ntmgc" [b2e07a8a-ed53-4151-9cdd-6345d84fea7d] Running
	I1026 01:02:54.240571   27934 system_pods.go:89] "coredns-7c65d6cfc9-qx24f" [d7fc0eb5-4828-436f-a5c8-8de607f590cf] Running
	I1026 01:02:54.240574   27934 system_pods.go:89] "etcd-ha-300623" [7af25c40-90db-43fb-9d1c-02d3b6092d30] Running
	I1026 01:02:54.240578   27934 system_pods.go:89] "etcd-ha-300623-m02" [5e6978a1-41aa-46dd-a1cd-e02042d4ce04] Running
	I1026 01:02:54.240582   27934 system_pods.go:89] "etcd-ha-300623-m03" [018c3dbe-0bf5-489e-804a-fb1e3195eded] Running
	I1026 01:02:54.240586   27934 system_pods.go:89] "kindnet-2v827" [0a2f3ac1-e6ff-4f8a-83bd-0b8c82e2070b] Running
	I1026 01:02:54.240589   27934 system_pods.go:89] "kindnet-4cqmf" [c887471a-629c-4bf1-9296-8ccb5ba56cd6] Running
	I1026 01:02:54.240592   27934 system_pods.go:89] "kindnet-g5bkb" [0ad4551d-8c28-45b3-9563-03d427208f4f] Running
	I1026 01:02:54.240595   27934 system_pods.go:89] "kube-apiserver-ha-300623" [23f40207-db77-4a02-a2dc-eecea5b1874a] Running
	I1026 01:02:54.240599   27934 system_pods.go:89] "kube-apiserver-ha-300623-m02" [6e2d1aeb-ad12-4328-b4da-6b3a2fd19df0] Running
	I1026 01:02:54.240602   27934 system_pods.go:89] "kube-apiserver-ha-300623-m03" [4f6f2be0-c13c-48d1-b645-719d861bfc9d] Running
	I1026 01:02:54.240606   27934 system_pods.go:89] "kube-controller-manager-ha-300623" [b9c979d4-64e6-473c-b688-295ddd98c379] Running
	I1026 01:02:54.240609   27934 system_pods.go:89] "kube-controller-manager-ha-300623-m02" [4ae0dbcd-d50c-4a53-9347-bed0a06f1f15] Running
	I1026 01:02:54.240613   27934 system_pods.go:89] "kube-controller-manager-ha-300623-m03" [43a89828-44bd-4c39-8656-ce212592e684] Running
	I1026 01:02:54.240616   27934 system_pods.go:89] "kube-proxy-65rns" [895d0bd9-0f38-442f-99a2-6c5c70bddd39] Running
	I1026 01:02:54.240620   27934 system_pods.go:89] "kube-proxy-7hn2d" [8ffc007b-7e17-4810-9f44-f190a8a7d21b] Running
	I1026 01:02:54.240624   27934 system_pods.go:89] "kube-proxy-mv7sf" [687c9b8d-6dc7-46b4-b5c6-dce15b93fe5c] Running
	I1026 01:02:54.240627   27934 system_pods.go:89] "kube-scheduler-ha-300623" [fcbddffd-40d8-4ebd-bf1e-58b1457af487] Running
	I1026 01:02:54.240632   27934 system_pods.go:89] "kube-scheduler-ha-300623-m02" [81664577-53a3-46fd-98f0-5a517d60fc40] Running
	I1026 01:02:54.240635   27934 system_pods.go:89] "kube-scheduler-ha-300623-m03" [4e0f23a0-d27b-4a4f-88cb-9f9fd09cc873] Running
	I1026 01:02:54.240641   27934 system_pods.go:89] "kube-vip-ha-300623" [23c24ab4-cff5-48fa-841b-9567360cbb00] Running
	I1026 01:02:54.240644   27934 system_pods.go:89] "kube-vip-ha-300623-m02" [5e054e06-be47-4fca-bf3d-d0919d31fe23] Running
	I1026 01:02:54.240647   27934 system_pods.go:89] "kube-vip-ha-300623-m03" [e650a523-9ff0-41d2-9446-c84aa4f0b88c] Running
	I1026 01:02:54.240650   27934 system_pods.go:89] "storage-provisioner" [28d286b1-45b3-4775-a8ff-47dc3cb84792] Running
	I1026 01:02:54.240656   27934 system_pods.go:126] duration metric: took 209.550822ms to wait for k8s-apps to be running ...
	I1026 01:02:54.240667   27934 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 01:02:54.240705   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:02:54.259476   27934 system_svc.go:56] duration metric: took 18.80003ms WaitForService to wait for kubelet
	I1026 01:02:54.259503   27934 kubeadm.go:582] duration metric: took 24.122925603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 01:02:54.259520   27934 node_conditions.go:102] verifying NodePressure condition ...
	I1026 01:02:54.427334   27934 request.go:632] Waited for 167.728559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I1026 01:02:54.427409   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I1026 01:02:54.427417   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:54.427430   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:54.427440   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:54.431191   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:54.432324   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:02:54.432349   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:02:54.432365   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:02:54.432369   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:02:54.432378   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:02:54.432383   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:02:54.432391   27934 node_conditions.go:105] duration metric: took 172.867066ms to run NodePressure ...
	I1026 01:02:54.432404   27934 start.go:241] waiting for startup goroutines ...
	I1026 01:02:54.432431   27934 start.go:255] writing updated cluster config ...
	I1026 01:02:54.432784   27934 ssh_runner.go:195] Run: rm -f paused
	I1026 01:02:54.484591   27934 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1026 01:02:54.487070   27934 out.go:177] * Done! kubectl is now configured to use "ha-300623" cluster and "default" namespace by default
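	For context on the node_ready/pod_ready polling captured above: it repeatedly GETs the node and kube-system pod objects until their Ready conditions report True. The snippet below is a minimal client-go sketch of that style of readiness wait; the function name, 500ms poll interval, and kubeconfig handling are illustrative assumptions and are not minikube's actual implementation (minikube's own logic is in the node_ready.go/pod_ready.go files referenced in the log). The pod_ready checks follow the same pattern against pod conditions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the API server until the named node reports a
	// Ready condition with status True, or the timeout expires. Interval
	// and timeout here are assumptions for illustration only.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		// Build a client from the default kubeconfig; ha-300623-m03 is the
		// node the log above was waiting on.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "ha-300623-m03", 6*time.Minute); err != nil {
			fmt.Println("node not ready:", err)
			return
		}
		fmt.Println("node ready")
	}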
	
	
	==> CRI-O <==
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.724829698Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904793724803821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0eb8597d-82b6-49fb-af3d-9ce8a93176c9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.725604195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c63e2252-894f-42ef-b6ac-79d8b0c57c34 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.725697992Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c63e2252-894f-42ef-b6ac-79d8b0c57c34 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.725961891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85cbf0b8850a2112e92fcc3614b8431c369be6d12b745402809010b5c69e6855,PodSandboxId:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1729904578918936204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758,PodSandboxId:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438995903574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d,PodSandboxId:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438988403122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d7fc0eb5-4828-436f-a5c8-8de607f590cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862c0633984db26e703979be6515817dbe5b1bab13be77cbd4231bdb96801841,PodSandboxId:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1729904437981904808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde,PodSandboxId:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17299044
25720308757,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa,PodSandboxId:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729904425491717711,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c,PodSandboxId:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1729904415339625152,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b,PodSandboxId:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729904412567756795,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3,PodSandboxId:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729904412574844578,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901,PodSandboxId:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729904412570090151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d,PodSandboxId:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729904412474137473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c63e2252-894f-42ef-b6ac-79d8b0c57c34 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.765715761Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0a2e07a1-8662-495a-a073-6983f088590a name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.765791746Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0a2e07a1-8662-495a-a073-6983f088590a name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.767067536Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc2ab9eb-8c5b-481e-9716-7e1eda6ad496 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.767512749Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904793767490431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc2ab9eb-8c5b-481e-9716-7e1eda6ad496 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.768003598Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87005983-de54-468e-ade6-ac1b7a21ec01 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.768079564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87005983-de54-468e-ade6-ac1b7a21ec01 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.768307426Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85cbf0b8850a2112e92fcc3614b8431c369be6d12b745402809010b5c69e6855,PodSandboxId:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1729904578918936204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758,PodSandboxId:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438995903574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d,PodSandboxId:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438988403122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d7fc0eb5-4828-436f-a5c8-8de607f590cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862c0633984db26e703979be6515817dbe5b1bab13be77cbd4231bdb96801841,PodSandboxId:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1729904437981904808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde,PodSandboxId:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17299044
25720308757,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa,PodSandboxId:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729904425491717711,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c,PodSandboxId:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1729904415339625152,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b,PodSandboxId:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729904412567756795,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3,PodSandboxId:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729904412574844578,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901,PodSandboxId:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729904412570090151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d,PodSandboxId:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729904412474137473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87005983-de54-468e-ade6-ac1b7a21ec01 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.804181259Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b1a3a14-19f6-4ec8-b730-bba82d75fccf name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.804271559Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b1a3a14-19f6-4ec8-b730-bba82d75fccf name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.805266674Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60bd3285-8b0e-4d10-8aef-9d2b2de0e54a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.805859145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904793805834470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60bd3285-8b0e-4d10-8aef-9d2b2de0e54a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.806367504Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e288130-cea5-4ed6-b115-46bd9f35be5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.806436546Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e288130-cea5-4ed6-b115-46bd9f35be5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.806713746Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85cbf0b8850a2112e92fcc3614b8431c369be6d12b745402809010b5c69e6855,PodSandboxId:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1729904578918936204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758,PodSandboxId:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438995903574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d,PodSandboxId:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438988403122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d7fc0eb5-4828-436f-a5c8-8de607f590cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862c0633984db26e703979be6515817dbe5b1bab13be77cbd4231bdb96801841,PodSandboxId:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1729904437981904808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde,PodSandboxId:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17299044
25720308757,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa,PodSandboxId:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729904425491717711,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c,PodSandboxId:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1729904415339625152,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b,PodSandboxId:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729904412567756795,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3,PodSandboxId:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729904412574844578,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901,PodSandboxId:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729904412570090151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d,PodSandboxId:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729904412474137473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e288130-cea5-4ed6-b115-46bd9f35be5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.847931730Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4eab15a-835d-4f53-bdb6-5134a249cd23 name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.848013798Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4eab15a-835d-4f53-bdb6-5134a249cd23 name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.849518542Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1aa01c5-7b3b-48cd-a949-305daad43001 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.850199435Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904793850161259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1aa01c5-7b3b-48cd-a949-305daad43001 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.850989638Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f4037c3-a738-4230-a893-cf47b4082549 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.851093441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f4037c3-a738-4230-a893-cf47b4082549 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:33 ha-300623 crio[655]: time="2024-10-26 01:06:33.851433409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85cbf0b8850a2112e92fcc3614b8431c369be6d12b745402809010b5c69e6855,PodSandboxId:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1729904578918936204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758,PodSandboxId:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438995903574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d,PodSandboxId:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438988403122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d7fc0eb5-4828-436f-a5c8-8de607f590cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862c0633984db26e703979be6515817dbe5b1bab13be77cbd4231bdb96801841,PodSandboxId:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1729904437981904808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde,PodSandboxId:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17299044
25720308757,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa,PodSandboxId:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729904425491717711,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c,PodSandboxId:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1729904415339625152,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b,PodSandboxId:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729904412567756795,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3,PodSandboxId:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729904412574844578,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901,PodSandboxId:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729904412570090151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d,PodSandboxId:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729904412474137473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f4037c3-a738-4230-a893-cf47b4082549 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85cbf0b8850a2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   731eca9181f8b       busybox-7dff88458-x8rtl
	ca2bd9d7fe0a2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   20e3c054f64b8       coredns-7c65d6cfc9-ntmgc
	56c849c3f6d25       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   d580ea18268bf       coredns-7c65d6cfc9-qx24f
	862c0633984db       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   f6635176e0517       storage-provisioner
	d6d0d55128c15       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                      6 minutes ago       Running             kindnet-cni               0                   cffe8a0cf602c       kindnet-4cqmf
	f7fca08cb5de6       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   94078692adcf1       kube-proxy-65rns
	a103c72040168       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215     6 minutes ago       Running             kube-vip                  0                   620e95994188b       kube-vip-ha-300623
	47a0b2ec9c50d       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   f86f0547d7e3f       kube-controller-manager-ha-300623
	3e321e090fa4b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   a63bff1c62868       etcd-ha-300623
	3c25e47b58ddc       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   9b38c5bcef6f6       kube-scheduler-ha-300623
	3bcea9b84ac37       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   e9bc0343ef669       kube-apiserver-ha-300623
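
The table above is minikube's rendering of the same container list that crio returns for the ListContainers RPCs logged earlier in this section. Purely as an illustrative sketch (not part of the test harness), a small Go program can issue that RPC directly; the socket path is taken from the node's cri-socket annotation (unix:///var/run/crio/crio.sock) and the insecure local connection is an assumption.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI socket path, from the node annotation shown later in this report.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	// Empty filter: the same "full container list" the crio debug log reports.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%.13s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}
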
	
	
	==> coredns [56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d] <==
	[INFO] 10.244.0.4:35752 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000083964s
	[INFO] 10.244.0.4:46160 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000070172s
	[INFO] 10.244.2.2:48496 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233704s
	[INFO] 10.244.2.2:43326 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002692245s
	[INFO] 10.244.1.2:54632 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145197s
	[INFO] 10.244.1.2:39137 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001866788s
	[INFO] 10.244.1.2:37569 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000241474s
	[INFO] 10.244.0.4:42983 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170463s
	[INFO] 10.244.0.4:34095 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002204796s
	[INFO] 10.244.0.4:47258 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001867963s
	[INFO] 10.244.0.4:59491 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141493s
	[INFO] 10.244.0.4:57514 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133403s
	[INFO] 10.244.0.4:45585 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000174758s
	[INFO] 10.244.2.2:57387 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165086s
	[INFO] 10.244.2.2:37898 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136051s
	[INFO] 10.244.1.2:45240 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130797s
	[INFO] 10.244.1.2:40585 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000259318s
	[INFO] 10.244.1.2:54189 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089088s
	[INFO] 10.244.1.2:56872 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108098s
	[INFO] 10.244.0.4:43642 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083444s
	[INFO] 10.244.2.2:37138 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000161058s
	[INFO] 10.244.1.2:45522 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000237498s
	[INFO] 10.244.1.2:48964 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000122296s
	[INFO] 10.244.0.4:46128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168182s
	[INFO] 10.244.0.4:35635 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143147s
	
	
	==> coredns [ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758] <==
	[INFO] 10.244.2.2:54963 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004547023s
	[INFO] 10.244.2.2:34531 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000244595s
	[INFO] 10.244.2.2:44217 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000362208s
	[INFO] 10.244.2.2:60780 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018037s
	[INFO] 10.244.2.2:60725 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000259265s
	[INFO] 10.244.2.2:33992 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168214s
	[INFO] 10.244.1.2:48441 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000237097s
	[INFO] 10.244.1.2:50414 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002508011s
	[INFO] 10.244.1.2:36962 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211094s
	[INFO] 10.244.1.2:45147 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163251s
	[INFO] 10.244.1.2:56149 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125966s
	[INFO] 10.244.0.4:56735 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092196s
	[INFO] 10.244.0.4:37487 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002015s
	[INFO] 10.244.2.2:53825 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125794s
	[INFO] 10.244.2.2:52505 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000213989s
	[INFO] 10.244.0.4:37131 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125177s
	[INFO] 10.244.0.4:45742 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131329s
	[INFO] 10.244.0.4:52634 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089226s
	[INFO] 10.244.2.2:58146 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000286556s
	[INFO] 10.244.2.2:59488 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000218728s
	[INFO] 10.244.2.2:51165 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00028421s
	[INFO] 10.244.1.2:37736 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160474s
	[INFO] 10.244.1.2:60585 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000238531s
	[INFO] 10.244.0.4:46233 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000078598s
	[INFO] 10.244.0.4:39578 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000277206s
	
	
	==> describe nodes <==
	Name:               ha-300623
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-300623
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=ha-300623
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_26T01_00_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:00:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-300623
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:06:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 01:03:22 +0000   Sat, 26 Oct 2024 01:00:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 01:03:22 +0000   Sat, 26 Oct 2024 01:00:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 01:03:22 +0000   Sat, 26 Oct 2024 01:00:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 01:03:22 +0000   Sat, 26 Oct 2024 01:00:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-300623
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 92684f32bf5c4a5ea50d57cd59f5b8ee
	  System UUID:                92684f32-bf5c-4a5e-a50d-57cd59f5b8ee
	  Boot ID:                    3d5330c9-a2ef-4296-ab11-4c9bb32f97df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x8rtl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 coredns-7c65d6cfc9-ntmgc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 coredns-7c65d6cfc9-qx24f             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 etcd-ha-300623                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m15s
	  kube-system                 kindnet-4cqmf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-apiserver-ha-300623             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-ha-300623    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-65rns                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-ha-300623             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-vip-ha-300623                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m8s   kube-proxy       
	  Normal  Starting                 6m16s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m15s  kubelet          Node ha-300623 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m15s  kubelet          Node ha-300623 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m15s  kubelet          Node ha-300623 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m12s  node-controller  Node ha-300623 event: Registered Node ha-300623 in Controller
	  Normal  NodeReady                5m57s  kubelet          Node ha-300623 status is now: NodeReady
	  Normal  RegisteredNode           5m13s  node-controller  Node ha-300623 event: Registered Node ha-300623 in Controller
	  Normal  RegisteredNode           3m59s  node-controller  Node ha-300623 event: Registered Node ha-300623 in Controller
	
	
	Name:               ha-300623-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-300623-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=ha-300623
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_26T01_01_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:01:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-300623-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:04:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 26 Oct 2024 01:03:16 +0000   Sat, 26 Oct 2024 01:04:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 26 Oct 2024 01:03:16 +0000   Sat, 26 Oct 2024 01:04:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 26 Oct 2024 01:03:16 +0000   Sat, 26 Oct 2024 01:04:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 26 Oct 2024 01:03:16 +0000   Sat, 26 Oct 2024 01:04:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    ha-300623-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 619e0e81a0ef43a9b2e79bbc4eb9355e
	  System UUID:                619e0e81-a0ef-43a9-b2e7-9bbc4eb9355e
	  Boot ID:                    89b92f6c-664b-4721-8f8c-216a0ad0c2d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qtdcl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-300623-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m19s
	  kube-system                 kindnet-g5bkb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-300623-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-ha-300623-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-proxy-7hn2d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-300623-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-vip-ha-300623-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m21s)  kubelet          Node ha-300623-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m21s)  kubelet          Node ha-300623-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m21s)  kubelet          Node ha-300623-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-300623-m02 event: Registered Node ha-300623-m02 in Controller
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-300623-m02 event: Registered Node ha-300623-m02 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-300623-m02 event: Registered Node ha-300623-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-300623-m02 status is now: NodeNotReady
	
	
	Name:               ha-300623-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-300623-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=ha-300623
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_26T01_02_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:02:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-300623-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:06:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 01:03:27 +0000   Sat, 26 Oct 2024 01:02:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 01:03:27 +0000   Sat, 26 Oct 2024 01:02:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 01:03:27 +0000   Sat, 26 Oct 2024 01:02:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 01:03:27 +0000   Sat, 26 Oct 2024 01:02:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.180
	  Hostname:    ha-300623-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 97987e99f2594f70b58fe3aa149b6c7c
	  System UUID:                97987e99-f259-4f70-b58f-e3aa149b6c7c
	  Boot ID:                    7e140c77-fbc1-46f9-addb-72cf937d1703
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mbn94                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-300623-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m6s
	  kube-system                 kindnet-2v827                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-apiserver-ha-300623-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-ha-300623-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-proxy-mv7sf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-ha-300623-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-vip-ha-300623-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  RegisteredNode           4m8s                 node-controller  Node ha-300623-m03 event: Registered Node ha-300623-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node ha-300623-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node ha-300623-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node ha-300623-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                 node-controller  Node ha-300623-m03 event: Registered Node ha-300623-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-300623-m03 event: Registered Node ha-300623-m03 in Controller
	
	
	Name:               ha-300623-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-300623-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=ha-300623
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_26T01_03_33_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:03:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-300623-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:06:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 01:04:03 +0000   Sat, 26 Oct 2024 01:03:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 01:04:03 +0000   Sat, 26 Oct 2024 01:03:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 01:04:03 +0000   Sat, 26 Oct 2024 01:03:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 01:04:03 +0000   Sat, 26 Oct 2024 01:03:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.197
	  Hostname:    ha-300623-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 505edce099ab4a75b83037ad7ab46771
	  System UUID:                505edce0-99ab-4a75-b830-37ad7ab46771
	  Boot ID:                    896f9280-eb70-46a8-9d85-c3814086494a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fsnn6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-4zk2k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m2s)  kubelet          Node ha-300623-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m2s)  kubelet          Node ha-300623-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m2s)  kubelet          Node ha-300623-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-300623-m04 event: Registered Node ha-300623-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-300623-m04 event: Registered Node ha-300623-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-300623-m04 event: Registered Node ha-300623-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-300623-m04 status is now: NodeReady
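The Unknown conditions and unreachable taints on ha-300623-m02 above are what the API server reports once that node's kubelet stops posting status; the other three nodes remain Ready. A quick way to pull just the Ready condition per node, assuming the kubectl context matches the profile name (this command is not part of the test run itself), is:

  kubectl --context ha-300623 get nodes \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'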
	
	
	==> dmesg <==
	[Oct26 00:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050258] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037804] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.782226] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.951939] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.521399] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct26 01:00] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.061621] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060766] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.166618] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.145628] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.268359] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +3.874441] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.666530] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.060776] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.257866] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.091250] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.528305] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.572352] kauditd_printk_skb: 41 callbacks suppressed
	[Oct26 01:01] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901] <==
	{"level":"warn","ts":"2024-10-26T01:06:34.101555Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.111312Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.115349Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.123577Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.133489Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.140734Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.145271Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.149786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.159961Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.160143Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.168216Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.173692Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.177691Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.182762Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.187890Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.193892Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.199731Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.203993Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.207228Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.211562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.217067Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.222092Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.259204Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.263535Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:34.280967Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 01:06:34 up 6 min,  0 users,  load average: 0.15, 0.24, 0.13
	Linux ha-300623 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde] <==
	I1026 01:05:57.184039       1 main.go:323] Node ha-300623-m04 has CIDR [10.244.3.0/24] 
	I1026 01:06:07.182751       1 main.go:296] Handling node with IPs: map[192.168.39.183:{}]
	I1026 01:06:07.182887       1 main.go:300] handling current node
	I1026 01:06:07.182926       1 main.go:296] Handling node with IPs: map[192.168.39.62:{}]
	I1026 01:06:07.182953       1 main.go:323] Node ha-300623-m02 has CIDR [10.244.1.0/24] 
	I1026 01:06:07.183583       1 main.go:296] Handling node with IPs: map[192.168.39.180:{}]
	I1026 01:06:07.183731       1 main.go:323] Node ha-300623-m03 has CIDR [10.244.2.0/24] 
	I1026 01:06:07.184425       1 main.go:296] Handling node with IPs: map[192.168.39.197:{}]
	I1026 01:06:07.184462       1 main.go:323] Node ha-300623-m04 has CIDR [10.244.3.0/24] 
	I1026 01:06:17.174569       1 main.go:296] Handling node with IPs: map[192.168.39.183:{}]
	I1026 01:06:17.174737       1 main.go:300] handling current node
	I1026 01:06:17.174803       1 main.go:296] Handling node with IPs: map[192.168.39.62:{}]
	I1026 01:06:17.174825       1 main.go:323] Node ha-300623-m02 has CIDR [10.244.1.0/24] 
	I1026 01:06:17.175067       1 main.go:296] Handling node with IPs: map[192.168.39.180:{}]
	I1026 01:06:17.175100       1 main.go:323] Node ha-300623-m03 has CIDR [10.244.2.0/24] 
	I1026 01:06:17.175206       1 main.go:296] Handling node with IPs: map[192.168.39.197:{}]
	I1026 01:06:17.175228       1 main.go:323] Node ha-300623-m04 has CIDR [10.244.3.0/24] 
	I1026 01:06:27.175173       1 main.go:296] Handling node with IPs: map[192.168.39.183:{}]
	I1026 01:06:27.175288       1 main.go:300] handling current node
	I1026 01:06:27.175317       1 main.go:296] Handling node with IPs: map[192.168.39.62:{}]
	I1026 01:06:27.175335       1 main.go:323] Node ha-300623-m02 has CIDR [10.244.1.0/24] 
	I1026 01:06:27.175551       1 main.go:296] Handling node with IPs: map[192.168.39.180:{}]
	I1026 01:06:27.175580       1 main.go:323] Node ha-300623-m03 has CIDR [10.244.2.0/24] 
	I1026 01:06:27.175762       1 main.go:296] Handling node with IPs: map[192.168.39.197:{}]
	I1026 01:06:27.175795       1 main.go:323] Node ha-300623-m04 has CIDR [10.244.3.0/24] 
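kindnet's routing messages above mirror each node's spec.podCIDR (10.244.0.0/24 through 10.244.3.0/24). The assignments can be listed directly, assuming the kubectl context matches the profile name:

  kubectl --context ha-300623 get nodes \
    -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR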
	
	
	==> kube-apiserver [3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d] <==
	W1026 01:00:17.926981       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.183]
	I1026 01:00:17.928181       1 controller.go:615] quota admission added evaluator for: endpoints
	I1026 01:00:17.935826       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 01:00:17.947904       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1026 01:00:18.894624       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1026 01:00:18.916292       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 01:00:19.043184       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1026 01:00:23.502518       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1026 01:00:23.580105       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1026 01:03:00.396346       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48596: use of closed network connection
	E1026 01:03:00.597696       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48608: use of closed network connection
	E1026 01:03:00.779383       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48638: use of closed network connection
	E1026 01:03:00.968960       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48650: use of closed network connection
	E1026 01:03:01.159859       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48672: use of closed network connection
	E1026 01:03:01.356945       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48682: use of closed network connection
	E1026 01:03:01.529718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48700: use of closed network connection
	E1026 01:03:01.709409       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60606: use of closed network connection
	E1026 01:03:01.891333       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60636: use of closed network connection
	E1026 01:03:02.183836       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60668: use of closed network connection
	E1026 01:03:02.371592       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60688: use of closed network connection
	E1026 01:03:02.545427       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60698: use of closed network connection
	E1026 01:03:02.716320       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60708: use of closed network connection
	E1026 01:03:02.895527       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60734: use of closed network connection
	E1026 01:03:03.082972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60756: use of closed network connection
	W1026 01:04:27.938129       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.180 192.168.39.183]
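The two "Resetting endpoints for master service \"kubernetes\"" lines track which control-plane IPs back the default/kubernetes Service: 192.168.39.183 alone at startup, then 192.168.39.180 and 192.168.39.183 once ha-300623-m02 drops out. The current set can be read back with a plain kubectl query (context name assumed):

  kubectl --context ha-300623 -n default get endpoints kubernetes \
    -o jsonpath='{.subsets[*].addresses[*].ip}'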
	
	
	==> kube-controller-manager [47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3] <==
	I1026 01:03:33.037458       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:33.051536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:33.162489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	E1026 01:03:33.296244       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"ff6c8323-43e2-4224-a2c5-fbee23186204\", ResourceVersion:\"911\", Generation:1, CreationTimestamp:time.Date(2024, time.October, 26, 1, 0, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\\",
\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20241007-36f62932\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b16180), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\
", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002641908), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeCl
aimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002641920), EmptyDir:(*v1.EmptyDirVolumeSource)(n
il), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVo
lumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002641938), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), Azur
eFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20241007-36f62932\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001b161a0)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSou
rce)(0xc001b161e0)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false,
RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc002a7eba0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContai
ner(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002879af8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002835100), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Ove
rhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0029fa100)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002879b40)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1026 01:03:33.604085       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:35.173961       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:36.911095       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:36.978536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:37.761108       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-300623-m04"
	I1026 01:03:37.763013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:37.822795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:43.288569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:52.993775       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-300623-m04"
	I1026 01:03:52.994235       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:53.016162       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:55.127200       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:04:03.835355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:04:47.785209       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-300623-m04"
	I1026 01:04:47.785779       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m02"
	I1026 01:04:47.821461       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m02"
	I1026 01:04:47.859957       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.530512ms"
	I1026 01:04:47.860782       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="74.115µs"
	I1026 01:04:50.162222       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m02"
	I1026 01:04:52.952538       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m02"
	
	
	==> kube-proxy [f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1026 01:00:25.689413       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1026 01:00:25.723767       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.183"]
	E1026 01:00:25.723854       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 01:00:25.758166       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1026 01:00:25.758214       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 01:00:25.758247       1 server_linux.go:169] "Using iptables Proxier"
	I1026 01:00:25.760715       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 01:00:25.761068       1 server.go:483] "Version info" version="v1.31.2"
	I1026 01:00:25.761102       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 01:00:25.763718       1 config.go:199] "Starting service config controller"
	I1026 01:00:25.763757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1026 01:00:25.763790       1 config.go:105] "Starting endpoint slice config controller"
	I1026 01:00:25.763796       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1026 01:00:25.764426       1 config.go:328] "Starting node config controller"
	I1026 01:00:25.764461       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1026 01:00:25.864157       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1026 01:00:25.864237       1 shared_informer.go:320] Caches are synced for service config
	I1026 01:00:25.864661       1 shared_informer.go:320] Caches are synced for node config
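kube-proxy above fails its nftables cleanup because the guest kernel cannot process nft rules, then falls back to the iptables proxier in IPv4 single-stack mode. The configured (as opposed to effective) mode lives in the kube-proxy ConfigMap; the check below assumes the standard kubeadm layout with a config.conf data key and a matching context name:

  kubectl --context ha-300623 -n kube-system get configmap kube-proxy \
    -o jsonpath='{.data.config\.conf}' | grep -E '^mode:'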
	
	
	==> kube-scheduler [3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b] <==
	I1026 01:02:26.440503       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2v827" node="ha-300623-m03"
	E1026 01:02:55.345123       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qtdcl\": pod busybox-7dff88458-qtdcl is already assigned to node \"ha-300623-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-qtdcl" node="ha-300623-m02"
	E1026 01:02:55.345196       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1d2aa5b5-e44c-4423-a263-a19406face68(default/busybox-7dff88458-qtdcl) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-qtdcl"
	E1026 01:02:55.345218       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qtdcl\": pod busybox-7dff88458-qtdcl is already assigned to node \"ha-300623-m02\"" pod="default/busybox-7dff88458-qtdcl"
	I1026 01:02:55.345275       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-qtdcl" node="ha-300623-m02"
	E1026 01:02:55.394267       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x8rtl\": pod busybox-7dff88458-x8rtl is already assigned to node \"ha-300623\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-x8rtl" node="ha-300623"
	E1026 01:02:55.394343       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5(default/busybox-7dff88458-x8rtl) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-x8rtl"
	E1026 01:02:55.394364       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x8rtl\": pod busybox-7dff88458-x8rtl is already assigned to node \"ha-300623\"" pod="default/busybox-7dff88458-x8rtl"
	I1026 01:02:55.394386       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-x8rtl" node="ha-300623"
	E1026 01:02:55.394962       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-mbn94\": pod busybox-7dff88458-mbn94 is already assigned to node \"ha-300623-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-mbn94" node="ha-300623-m03"
	E1026 01:02:55.395010       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod dd5257f3-d0ba-4672-9836-da890e32fb0d(default/busybox-7dff88458-mbn94) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-mbn94"
	E1026 01:02:55.395023       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-mbn94\": pod busybox-7dff88458-mbn94 is already assigned to node \"ha-300623-m03\"" pod="default/busybox-7dff88458-mbn94"
	I1026 01:02:55.395037       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-mbn94" node="ha-300623-m03"
	E1026 01:03:33.099592       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4zk2k\": pod kube-proxy-4zk2k is already assigned to node \"ha-300623-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4zk2k" node="ha-300623-m04"
	E1026 01:03:33.101341       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8e40741c-73a0-41fa-b38f-a59fed42525b(kube-system/kube-proxy-4zk2k) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-4zk2k"
	E1026 01:03:33.101520       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4zk2k\": pod kube-proxy-4zk2k is already assigned to node \"ha-300623-m04\"" pod="kube-system/kube-proxy-4zk2k"
	I1026 01:03:33.101594       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4zk2k" node="ha-300623-m04"
	E1026 01:03:33.102404       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-l58kk\": pod kindnet-l58kk is already assigned to node \"ha-300623-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-l58kk" node="ha-300623-m04"
	E1026 01:03:33.109277       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 952ba5f9-93b1-4543-8b73-3ac1600315fc(kube-system/kindnet-l58kk) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-l58kk"
	E1026 01:03:33.109487       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-l58kk\": pod kindnet-l58kk is already assigned to node \"ha-300623-m04\"" pod="kube-system/kindnet-l58kk"
	I1026 01:03:33.109689       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-l58kk" node="ha-300623-m04"
	E1026 01:03:33.136820       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5lm6x\": pod kindnet-5lm6x is already assigned to node \"ha-300623-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5lm6x" node="ha-300623-m04"
	E1026 01:03:33.137312       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5lm6x\": pod kindnet-5lm6x is already assigned to node \"ha-300623-m04\"" pod="kube-system/kindnet-5lm6x"
	E1026 01:03:33.152104       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jhv9k\": pod kube-proxy-jhv9k is already assigned to node \"ha-300623-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jhv9k" node="ha-300623-m04"
	E1026 01:03:33.153545       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jhv9k\": pod kube-proxy-jhv9k is already assigned to node \"ha-300623-m04\"" pod="kube-system/kube-proxy-jhv9k"
	
	
	==> kubelet <==
	Oct 26 01:05:19 ha-300623 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 26 01:05:19 ha-300623 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 26 01:05:19 ha-300623 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 01:05:19 ha-300623 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 01:05:19 ha-300623 kubelet[1306]: E1026 01:05:19.171492    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904719170828944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:19 ha-300623 kubelet[1306]: E1026 01:05:19.171604    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904719170828944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:29 ha-300623 kubelet[1306]: E1026 01:05:29.173388    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904729173040296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:29 ha-300623 kubelet[1306]: E1026 01:05:29.173412    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904729173040296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:39 ha-300623 kubelet[1306]: E1026 01:05:39.176311    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904739175567800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:39 ha-300623 kubelet[1306]: E1026 01:05:39.176778    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904739175567800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:49 ha-300623 kubelet[1306]: E1026 01:05:49.179258    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904749178892500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:49 ha-300623 kubelet[1306]: E1026 01:05:49.179567    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904749178892500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:59 ha-300623 kubelet[1306]: E1026 01:05:59.181750    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904759181221897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:59 ha-300623 kubelet[1306]: E1026 01:05:59.181791    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904759181221897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:09 ha-300623 kubelet[1306]: E1026 01:06:09.183203    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904769182765460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:09 ha-300623 kubelet[1306]: E1026 01:06:09.183277    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904769182765460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:19 ha-300623 kubelet[1306]: E1026 01:06:19.106419    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 26 01:06:19 ha-300623 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 26 01:06:19 ha-300623 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 26 01:06:19 ha-300623 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 01:06:19 ha-300623 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 01:06:19 ha-300623 kubelet[1306]: E1026 01:06:19.185785    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904779185440641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:19 ha-300623 kubelet[1306]: E1026 01:06:19.185827    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904779185440641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:29 ha-300623 kubelet[1306]: E1026 01:06:29.188435    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904789187815376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:29 ha-300623 kubelet[1306]: E1026 01:06:29.188477    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904789187815376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-300623 -n ha-300623
helpers_test.go:261: (dbg) Run:  kubectl --context ha-300623 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.39s)
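Note on the captured logs above: the kube-scheduler "Plugin Failed ... already assigned to node" entries are retries that end with "Pod has been assigned to node", and the kubelet ip6tables-canary and "missing image stats" eviction-manager entries recur throughout these runs; they look like recurring warnings rather than the direct cause of the stop failure. A quick manual re-check after the node stop (illustrative commands only, not part of the test harness; the profile and node names are taken from the logs above) could be:

	out/minikube-linux-amd64 status -p ha-300623
	kubectl --context ha-300623 get nodes -o wide

status prints the per-node Host/Kubelet/APIServer state for the ha-300623 profile, and get nodes -o wide would be expected to show only ha-300623-m02 as NotReady after "node stop m02".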

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1026 01:06:36.821924   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:06:37.284803   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.411072347s)
ha_test.go:415: expected profile "ha-300623" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-300623\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-300623\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-300623\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.183\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.62\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.180\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.197\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevi
rt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\
",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-300623 -n ha-300623
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-300623 logs -n 25: (1.299086147s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2355760230/001/cp-test_ha-300623-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623:/home/docker/cp-test_ha-300623-m03_ha-300623.txt                       |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623 sudo cat                                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m03_ha-300623.txt                                 |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m02:/home/docker/cp-test_ha-300623-m03_ha-300623-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m02 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m03_ha-300623-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04:/home/docker/cp-test_ha-300623-m03_ha-300623-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m04 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m03_ha-300623-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp testdata/cp-test.txt                                                | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2355760230/001/cp-test_ha-300623-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623:/home/docker/cp-test_ha-300623-m04_ha-300623.txt                       |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623 sudo cat                                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623.txt                                 |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m02:/home/docker/cp-test_ha-300623-m04_ha-300623-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m02 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03:/home/docker/cp-test_ha-300623-m04_ha-300623-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m03 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-300623 node stop m02 -v=7                                                     | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 00:59:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 00:59:41.102327   27934 out.go:345] Setting OutFile to fd 1 ...
	I1026 00:59:41.102422   27934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:59:41.102427   27934 out.go:358] Setting ErrFile to fd 2...
	I1026 00:59:41.102431   27934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:59:41.102629   27934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 00:59:41.103175   27934 out.go:352] Setting JSON to false
	I1026 00:59:41.103986   27934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2521,"bootTime":1729901860,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 00:59:41.104085   27934 start.go:139] virtualization: kvm guest
	I1026 00:59:41.106060   27934 out.go:177] * [ha-300623] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 00:59:41.107343   27934 notify.go:220] Checking for updates...
	I1026 00:59:41.107361   27934 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 00:59:41.108566   27934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:59:41.109853   27934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 00:59:41.111166   27934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:59:41.112531   27934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 00:59:41.113798   27934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 00:59:41.115167   27934 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 00:59:41.148833   27934 out.go:177] * Using the kvm2 driver based on user configuration
	I1026 00:59:41.150115   27934 start.go:297] selected driver: kvm2
	I1026 00:59:41.150128   27934 start.go:901] validating driver "kvm2" against <nil>
	I1026 00:59:41.150139   27934 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 00:59:41.150812   27934 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:59:41.150910   27934 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 00:59:41.165692   27934 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 00:59:41.165750   27934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 00:59:41.166043   27934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 00:59:41.166082   27934 cni.go:84] Creating CNI manager for ""
	I1026 00:59:41.166138   27934 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1026 00:59:41.166151   27934 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 00:59:41.166210   27934 start.go:340] cluster config:
	{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1026 00:59:41.166340   27934 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:59:41.168250   27934 out.go:177] * Starting "ha-300623" primary control-plane node in "ha-300623" cluster
	I1026 00:59:41.169625   27934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 00:59:41.169671   27934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 00:59:41.169699   27934 cache.go:56] Caching tarball of preloaded images
	I1026 00:59:41.169771   27934 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 00:59:41.169781   27934 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 00:59:41.170066   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 00:59:41.170083   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json: {Name:mkc18d341848fb714503df8b4bfc42be69331fb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:59:41.170205   27934 start.go:360] acquireMachinesLock for ha-300623: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 00:59:41.170231   27934 start.go:364] duration metric: took 14.614µs to acquireMachinesLock for "ha-300623"
	I1026 00:59:41.170247   27934 start.go:93] Provisioning new machine with config: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 00:59:41.170298   27934 start.go:125] createHost starting for "" (driver="kvm2")
	I1026 00:59:41.171896   27934 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1026 00:59:41.172034   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:59:41.172078   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:59:41.186522   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39131
	I1026 00:59:41.186988   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:59:41.187517   27934 main.go:141] libmachine: Using API Version  1
	I1026 00:59:41.187539   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:59:41.187925   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:59:41.188146   27934 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 00:59:41.188284   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 00:59:41.188436   27934 start.go:159] libmachine.API.Create for "ha-300623" (driver="kvm2")
	I1026 00:59:41.188472   27934 client.go:168] LocalClient.Create starting
	I1026 00:59:41.188506   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 00:59:41.188539   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 00:59:41.188554   27934 main.go:141] libmachine: Parsing certificate...
	I1026 00:59:41.188604   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 00:59:41.188622   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 00:59:41.188635   27934 main.go:141] libmachine: Parsing certificate...
	I1026 00:59:41.188652   27934 main.go:141] libmachine: Running pre-create checks...
	I1026 00:59:41.188664   27934 main.go:141] libmachine: (ha-300623) Calling .PreCreateCheck
	I1026 00:59:41.189023   27934 main.go:141] libmachine: (ha-300623) Calling .GetConfigRaw
	I1026 00:59:41.189374   27934 main.go:141] libmachine: Creating machine...
	I1026 00:59:41.189386   27934 main.go:141] libmachine: (ha-300623) Calling .Create
	I1026 00:59:41.189526   27934 main.go:141] libmachine: (ha-300623) Creating KVM machine...
	I1026 00:59:41.190651   27934 main.go:141] libmachine: (ha-300623) DBG | found existing default KVM network
	I1026 00:59:41.191301   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.191170   27957 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I1026 00:59:41.191329   27934 main.go:141] libmachine: (ha-300623) DBG | created network xml: 
	I1026 00:59:41.191339   27934 main.go:141] libmachine: (ha-300623) DBG | <network>
	I1026 00:59:41.191366   27934 main.go:141] libmachine: (ha-300623) DBG |   <name>mk-ha-300623</name>
	I1026 00:59:41.191399   27934 main.go:141] libmachine: (ha-300623) DBG |   <dns enable='no'/>
	I1026 00:59:41.191415   27934 main.go:141] libmachine: (ha-300623) DBG |   
	I1026 00:59:41.191424   27934 main.go:141] libmachine: (ha-300623) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1026 00:59:41.191431   27934 main.go:141] libmachine: (ha-300623) DBG |     <dhcp>
	I1026 00:59:41.191438   27934 main.go:141] libmachine: (ha-300623) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1026 00:59:41.191445   27934 main.go:141] libmachine: (ha-300623) DBG |     </dhcp>
	I1026 00:59:41.191450   27934 main.go:141] libmachine: (ha-300623) DBG |   </ip>
	I1026 00:59:41.191457   27934 main.go:141] libmachine: (ha-300623) DBG |   
	I1026 00:59:41.191462   27934 main.go:141] libmachine: (ha-300623) DBG | </network>
	I1026 00:59:41.191489   27934 main.go:141] libmachine: (ha-300623) DBG | 
	I1026 00:59:41.196331   27934 main.go:141] libmachine: (ha-300623) DBG | trying to create private KVM network mk-ha-300623 192.168.39.0/24...
	I1026 00:59:41.258139   27934 main.go:141] libmachine: (ha-300623) DBG | private KVM network mk-ha-300623 192.168.39.0/24 created
	I1026 00:59:41.258172   27934 main.go:141] libmachine: (ha-300623) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623 ...
	I1026 00:59:41.258186   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.258104   27957 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:59:41.258203   27934 main.go:141] libmachine: (ha-300623) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 00:59:41.258226   27934 main.go:141] libmachine: (ha-300623) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 00:59:41.511971   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.511837   27957 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa...
	I1026 00:59:41.679961   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.679835   27957 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/ha-300623.rawdisk...
	I1026 00:59:41.680008   27934 main.go:141] libmachine: (ha-300623) DBG | Writing magic tar header
	I1026 00:59:41.680023   27934 main.go:141] libmachine: (ha-300623) DBG | Writing SSH key tar header
	I1026 00:59:41.680037   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.679951   27957 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623 ...
	I1026 00:59:41.680109   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623
	I1026 00:59:41.680139   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 00:59:41.680156   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623 (perms=drwx------)
	I1026 00:59:41.680166   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:59:41.680185   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 00:59:41.680194   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 00:59:41.680209   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins
	I1026 00:59:41.680219   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home
	I1026 00:59:41.680230   27934 main.go:141] libmachine: (ha-300623) DBG | Skipping /home - not owner
	I1026 00:59:41.680244   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 00:59:41.680257   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 00:59:41.680313   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 00:59:41.680344   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 00:59:41.680359   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 00:59:41.680367   27934 main.go:141] libmachine: (ha-300623) Creating domain...
	I1026 00:59:41.681340   27934 main.go:141] libmachine: (ha-300623) define libvirt domain using xml: 
	I1026 00:59:41.681362   27934 main.go:141] libmachine: (ha-300623) <domain type='kvm'>
	I1026 00:59:41.681370   27934 main.go:141] libmachine: (ha-300623)   <name>ha-300623</name>
	I1026 00:59:41.681381   27934 main.go:141] libmachine: (ha-300623)   <memory unit='MiB'>2200</memory>
	I1026 00:59:41.681403   27934 main.go:141] libmachine: (ha-300623)   <vcpu>2</vcpu>
	I1026 00:59:41.681438   27934 main.go:141] libmachine: (ha-300623)   <features>
	I1026 00:59:41.681448   27934 main.go:141] libmachine: (ha-300623)     <acpi/>
	I1026 00:59:41.681452   27934 main.go:141] libmachine: (ha-300623)     <apic/>
	I1026 00:59:41.681457   27934 main.go:141] libmachine: (ha-300623)     <pae/>
	I1026 00:59:41.681471   27934 main.go:141] libmachine: (ha-300623)     
	I1026 00:59:41.681479   27934 main.go:141] libmachine: (ha-300623)   </features>
	I1026 00:59:41.681484   27934 main.go:141] libmachine: (ha-300623)   <cpu mode='host-passthrough'>
	I1026 00:59:41.681489   27934 main.go:141] libmachine: (ha-300623)   
	I1026 00:59:41.681494   27934 main.go:141] libmachine: (ha-300623)   </cpu>
	I1026 00:59:41.681500   27934 main.go:141] libmachine: (ha-300623)   <os>
	I1026 00:59:41.681504   27934 main.go:141] libmachine: (ha-300623)     <type>hvm</type>
	I1026 00:59:41.681512   27934 main.go:141] libmachine: (ha-300623)     <boot dev='cdrom'/>
	I1026 00:59:41.681520   27934 main.go:141] libmachine: (ha-300623)     <boot dev='hd'/>
	I1026 00:59:41.681528   27934 main.go:141] libmachine: (ha-300623)     <bootmenu enable='no'/>
	I1026 00:59:41.681532   27934 main.go:141] libmachine: (ha-300623)   </os>
	I1026 00:59:41.681539   27934 main.go:141] libmachine: (ha-300623)   <devices>
	I1026 00:59:41.681544   27934 main.go:141] libmachine: (ha-300623)     <disk type='file' device='cdrom'>
	I1026 00:59:41.681575   27934 main.go:141] libmachine: (ha-300623)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/boot2docker.iso'/>
	I1026 00:59:41.681594   27934 main.go:141] libmachine: (ha-300623)       <target dev='hdc' bus='scsi'/>
	I1026 00:59:41.681606   27934 main.go:141] libmachine: (ha-300623)       <readonly/>
	I1026 00:59:41.681615   27934 main.go:141] libmachine: (ha-300623)     </disk>
	I1026 00:59:41.681625   27934 main.go:141] libmachine: (ha-300623)     <disk type='file' device='disk'>
	I1026 00:59:41.681635   27934 main.go:141] libmachine: (ha-300623)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 00:59:41.681651   27934 main.go:141] libmachine: (ha-300623)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/ha-300623.rawdisk'/>
	I1026 00:59:41.681664   27934 main.go:141] libmachine: (ha-300623)       <target dev='hda' bus='virtio'/>
	I1026 00:59:41.681675   27934 main.go:141] libmachine: (ha-300623)     </disk>
	I1026 00:59:41.681686   27934 main.go:141] libmachine: (ha-300623)     <interface type='network'>
	I1026 00:59:41.681698   27934 main.go:141] libmachine: (ha-300623)       <source network='mk-ha-300623'/>
	I1026 00:59:41.681709   27934 main.go:141] libmachine: (ha-300623)       <model type='virtio'/>
	I1026 00:59:41.681719   27934 main.go:141] libmachine: (ha-300623)     </interface>
	I1026 00:59:41.681734   27934 main.go:141] libmachine: (ha-300623)     <interface type='network'>
	I1026 00:59:41.681746   27934 main.go:141] libmachine: (ha-300623)       <source network='default'/>
	I1026 00:59:41.681756   27934 main.go:141] libmachine: (ha-300623)       <model type='virtio'/>
	I1026 00:59:41.681773   27934 main.go:141] libmachine: (ha-300623)     </interface>
	I1026 00:59:41.681784   27934 main.go:141] libmachine: (ha-300623)     <serial type='pty'>
	I1026 00:59:41.681794   27934 main.go:141] libmachine: (ha-300623)       <target port='0'/>
	I1026 00:59:41.681803   27934 main.go:141] libmachine: (ha-300623)     </serial>
	I1026 00:59:41.681813   27934 main.go:141] libmachine: (ha-300623)     <console type='pty'>
	I1026 00:59:41.681823   27934 main.go:141] libmachine: (ha-300623)       <target type='serial' port='0'/>
	I1026 00:59:41.681835   27934 main.go:141] libmachine: (ha-300623)     </console>
	I1026 00:59:41.681847   27934 main.go:141] libmachine: (ha-300623)     <rng model='virtio'>
	I1026 00:59:41.681861   27934 main.go:141] libmachine: (ha-300623)       <backend model='random'>/dev/random</backend>
	I1026 00:59:41.681876   27934 main.go:141] libmachine: (ha-300623)     </rng>
	I1026 00:59:41.681884   27934 main.go:141] libmachine: (ha-300623)     
	I1026 00:59:41.681893   27934 main.go:141] libmachine: (ha-300623)     
	I1026 00:59:41.681902   27934 main.go:141] libmachine: (ha-300623)   </devices>
	I1026 00:59:41.681910   27934 main.go:141] libmachine: (ha-300623) </domain>
	I1026 00:59:41.681919   27934 main.go:141] libmachine: (ha-300623) 
	I1026 00:59:41.685794   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:bc:3c:c8 in network default
	I1026 00:59:41.686289   27934 main.go:141] libmachine: (ha-300623) Ensuring networks are active...
	I1026 00:59:41.686312   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:41.686908   27934 main.go:141] libmachine: (ha-300623) Ensuring network default is active
	I1026 00:59:41.687318   27934 main.go:141] libmachine: (ha-300623) Ensuring network mk-ha-300623 is active
	I1026 00:59:41.687714   27934 main.go:141] libmachine: (ha-300623) Getting domain xml...
	I1026 00:59:41.688278   27934 main.go:141] libmachine: (ha-300623) Creating domain...
	I1026 00:59:42.865174   27934 main.go:141] libmachine: (ha-300623) Waiting to get IP...
	I1026 00:59:42.866030   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:42.866436   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:42.866478   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:42.866424   27957 retry.go:31] will retry after 310.395452ms: waiting for machine to come up
	I1026 00:59:43.178911   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:43.179377   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:43.179517   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:43.179326   27957 retry.go:31] will retry after 258.757335ms: waiting for machine to come up
	I1026 00:59:43.439460   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:43.439855   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:43.439883   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:43.439810   27957 retry.go:31] will retry after 476.137443ms: waiting for machine to come up
	I1026 00:59:43.917472   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:43.917875   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:43.917910   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:43.917853   27957 retry.go:31] will retry after 411.866237ms: waiting for machine to come up
	I1026 00:59:44.331261   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:44.331762   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:44.331800   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:44.331724   27957 retry.go:31] will retry after 639.236783ms: waiting for machine to come up
	I1026 00:59:44.972039   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:44.972415   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:44.972443   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:44.972363   27957 retry.go:31] will retry after 943.318782ms: waiting for machine to come up
	I1026 00:59:45.917370   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:45.917808   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:45.917870   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:45.917775   27957 retry.go:31] will retry after 1.007000764s: waiting for machine to come up
	I1026 00:59:46.926545   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:46.926930   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:46.926955   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:46.926890   27957 retry.go:31] will retry after 905.175073ms: waiting for machine to come up
	I1026 00:59:47.834112   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:47.834468   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:47.834505   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:47.834452   27957 retry.go:31] will retry after 1.696390131s: waiting for machine to come up
	I1026 00:59:49.533204   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:49.533596   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:49.533625   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:49.533577   27957 retry.go:31] will retry after 2.087564363s: waiting for machine to come up
	I1026 00:59:51.622505   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:51.622952   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:51.623131   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:51.622900   27957 retry.go:31] will retry after 2.813881441s: waiting for machine to come up
	I1026 00:59:54.439730   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:54.440081   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:54.440111   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:54.440045   27957 retry.go:31] will retry after 2.560428672s: waiting for machine to come up
	I1026 00:59:57.002066   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:57.002394   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:57.002424   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:57.002352   27957 retry.go:31] will retry after 3.377744145s: waiting for machine to come up
	I1026 01:00:00.384015   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.384460   27934 main.go:141] libmachine: (ha-300623) Found IP for machine: 192.168.39.183
	I1026 01:00:00.384479   27934 main.go:141] libmachine: (ha-300623) Reserving static IP address...
	I1026 01:00:00.384505   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has current primary IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.384856   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find host DHCP lease matching {name: "ha-300623", mac: "52:54:00:4d:a0:46", ip: "192.168.39.183"} in network mk-ha-300623
	I1026 01:00:00.455221   27934 main.go:141] libmachine: (ha-300623) DBG | Getting to WaitForSSH function...
	I1026 01:00:00.455245   27934 main.go:141] libmachine: (ha-300623) Reserved static IP address: 192.168.39.183
	I1026 01:00:00.455253   27934 main.go:141] libmachine: (ha-300623) Waiting for SSH to be available...
	I1026 01:00:00.457760   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.458200   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.458223   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.458402   27934 main.go:141] libmachine: (ha-300623) DBG | Using SSH client type: external
	I1026 01:00:00.458428   27934 main.go:141] libmachine: (ha-300623) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa (-rw-------)
	I1026 01:00:00.458460   27934 main.go:141] libmachine: (ha-300623) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 01:00:00.458475   27934 main.go:141] libmachine: (ha-300623) DBG | About to run SSH command:
	I1026 01:00:00.458487   27934 main.go:141] libmachine: (ha-300623) DBG | exit 0
	I1026 01:00:00.585473   27934 main.go:141] libmachine: (ha-300623) DBG | SSH cmd err, output: <nil>: 
	I1026 01:00:00.585717   27934 main.go:141] libmachine: (ha-300623) KVM machine creation complete!
	I1026 01:00:00.586041   27934 main.go:141] libmachine: (ha-300623) Calling .GetConfigRaw
	I1026 01:00:00.586564   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:00.586735   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:00.586856   27934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 01:00:00.586870   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:00.588144   27934 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 01:00:00.588156   27934 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 01:00:00.588161   27934 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 01:00:00.588166   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:00.590434   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.590800   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.590815   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.590958   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:00.591118   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.591291   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.591416   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:00.591579   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:00.591799   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:00.591812   27934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 01:00:00.700544   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:00:00.700568   27934 main.go:141] libmachine: Detecting the provisioner...
	I1026 01:00:00.700586   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:00.703305   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.703686   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.703708   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.703827   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:00.704016   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.704163   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.704286   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:00.704450   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:00.704607   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:00.704617   27934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 01:00:00.813937   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 01:00:00.814027   27934 main.go:141] libmachine: found compatible host: buildroot
	I1026 01:00:00.814042   27934 main.go:141] libmachine: Provisioning with buildroot...
	I1026 01:00:00.814078   27934 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:00:00.814305   27934 buildroot.go:166] provisioning hostname "ha-300623"
	I1026 01:00:00.814333   27934 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:00:00.814495   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:00.817076   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.817394   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.817438   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.817578   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:00.817764   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.817892   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.818015   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:00.818165   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:00.818334   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:00.818344   27934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-300623 && echo "ha-300623" | sudo tee /etc/hostname
	I1026 01:00:00.943069   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-300623
	
	I1026 01:00:00.943097   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:00.946005   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.946325   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.946354   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.946524   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:00.946840   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.947004   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.947144   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:00.947328   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:00.947549   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:00.947572   27934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-300623' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-300623/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-300623' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:00:01.065899   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:00:01.065958   27934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:00:01.066012   27934 buildroot.go:174] setting up certificates
	I1026 01:00:01.066027   27934 provision.go:84] configureAuth start
	I1026 01:00:01.066042   27934 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:00:01.066285   27934 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:00:01.069069   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.069397   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.069440   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.069574   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.071665   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.072025   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.072053   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.072211   27934 provision.go:143] copyHostCerts
	I1026 01:00:01.072292   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:00:01.072346   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:00:01.072359   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:00:01.072430   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:00:01.072514   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:00:01.072533   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:00:01.072540   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:00:01.072577   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:00:01.072670   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:00:01.072703   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:00:01.072711   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:00:01.072743   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:00:01.072808   27934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.ha-300623 san=[127.0.0.1 192.168.39.183 ha-300623 localhost minikube]
	I1026 01:00:01.133729   27934 provision.go:177] copyRemoteCerts
	I1026 01:00:01.133783   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:00:01.133804   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.136311   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.136591   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.136617   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.136770   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.136937   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.137059   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.137192   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:01.222921   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:00:01.222983   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:00:01.245372   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:00:01.245444   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1026 01:00:01.267891   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:00:01.267957   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 01:00:01.289667   27934 provision.go:87] duration metric: took 223.628307ms to configureAuth
	I1026 01:00:01.289699   27934 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:00:01.289880   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:01.289953   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.292672   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.292982   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.293012   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.293184   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.293375   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.293624   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.293732   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.293904   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:01.294111   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:01.294137   27934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:00:01.522070   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:00:01.522096   27934 main.go:141] libmachine: Checking connection to Docker...
	I1026 01:00:01.522103   27934 main.go:141] libmachine: (ha-300623) Calling .GetURL
	I1026 01:00:01.523378   27934 main.go:141] libmachine: (ha-300623) DBG | Using libvirt version 6000000
	I1026 01:00:01.525286   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.525641   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.525670   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.525803   27934 main.go:141] libmachine: Docker is up and running!
	I1026 01:00:01.525822   27934 main.go:141] libmachine: Reticulating splines...
	I1026 01:00:01.525829   27934 client.go:171] duration metric: took 20.337349207s to LocalClient.Create
	I1026 01:00:01.525853   27934 start.go:167] duration metric: took 20.337416513s to libmachine.API.Create "ha-300623"
	I1026 01:00:01.525867   27934 start.go:293] postStartSetup for "ha-300623" (driver="kvm2")
	I1026 01:00:01.525878   27934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:00:01.525899   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.526150   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:00:01.526178   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.528275   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.528583   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.528614   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.528742   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.528907   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.529035   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.529169   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:01.615528   27934 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:00:01.619526   27934 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:00:01.619547   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:00:01.619607   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:00:01.619676   27934 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:00:01.619685   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /etc/ssl/certs/176152.pem
	I1026 01:00:01.619772   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:00:01.628818   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:00:01.651055   27934 start.go:296] duration metric: took 125.175871ms for postStartSetup
	I1026 01:00:01.651106   27934 main.go:141] libmachine: (ha-300623) Calling .GetConfigRaw
	I1026 01:00:01.651707   27934 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:00:01.654048   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.654337   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.654358   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.654637   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:00:01.654812   27934 start.go:128] duration metric: took 20.484504528s to createHost
	I1026 01:00:01.654833   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.656877   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.657252   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.657277   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.657399   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.657609   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.657759   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.657866   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.657999   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:01.658194   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:01.658205   27934 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:00:01.770028   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729904401.731044736
	
	I1026 01:00:01.770051   27934 fix.go:216] guest clock: 1729904401.731044736
	I1026 01:00:01.770074   27934 fix.go:229] Guest: 2024-10-26 01:00:01.731044736 +0000 UTC Remote: 2024-10-26 01:00:01.654822884 +0000 UTC m=+20.590184391 (delta=76.221852ms)
	I1026 01:00:01.770101   27934 fix.go:200] guest clock delta is within tolerance: 76.221852ms
	I1026 01:00:01.770108   27934 start.go:83] releasing machines lock for "ha-300623", held for 20.599868049s
	I1026 01:00:01.770184   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.770452   27934 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:00:01.772669   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.773035   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.773066   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.773320   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.773757   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.773942   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.774055   27934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:00:01.774095   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.774157   27934 ssh_runner.go:195] Run: cat /version.json
	I1026 01:00:01.774180   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.776503   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.776822   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.776846   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.776862   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.777013   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.777160   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.777266   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.777287   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.777291   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.777476   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.777463   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:01.777588   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.777703   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.777819   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:01.889672   27934 ssh_runner.go:195] Run: systemctl --version
	I1026 01:00:01.895441   27934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:00:02.062750   27934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:00:02.068559   27934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:00:02.068640   27934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:00:02.085755   27934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 01:00:02.085784   27934 start.go:495] detecting cgroup driver to use...
	I1026 01:00:02.085879   27934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:00:02.103715   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:00:02.116629   27934 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:00:02.116698   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:00:02.129921   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:00:02.143297   27934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:00:02.262539   27934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:00:02.410776   27934 docker.go:233] disabling docker service ...
	I1026 01:00:02.410852   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:00:02.425252   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:00:02.438874   27934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:00:02.567343   27934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:00:02.692382   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:00:02.705780   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:00:02.723128   27934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:00:02.723196   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.733126   27934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:00:02.733204   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.743104   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.752720   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.762245   27934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:00:02.772039   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.781522   27934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.797499   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.807723   27934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:00:02.816764   27934 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 01:00:02.816838   27934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 01:00:02.830364   27934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:00:02.840309   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:00:02.959488   27934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 01:00:03.048870   27934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:00:03.048952   27934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:00:03.053750   27934 start.go:563] Will wait 60s for crictl version
	I1026 01:00:03.053801   27934 ssh_runner.go:195] Run: which crictl
	I1026 01:00:03.057147   27934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:00:03.096489   27934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:00:03.096564   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:00:03.124313   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:00:03.153078   27934 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:00:03.154469   27934 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:00:03.157053   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:03.157290   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:03.157320   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:03.157571   27934 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:00:03.161502   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:00:03.173922   27934 kubeadm.go:883] updating cluster {Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 01:00:03.174024   27934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:00:03.174067   27934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:00:03.205502   27934 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1026 01:00:03.205563   27934 ssh_runner.go:195] Run: which lz4
	I1026 01:00:03.209242   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1026 01:00:03.209334   27934 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 01:00:03.213268   27934 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 01:00:03.213294   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1026 01:00:04.450368   27934 crio.go:462] duration metric: took 1.241064009s to copy over tarball
	I1026 01:00:04.450448   27934 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 01:00:06.473538   27934 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.023056026s)
	I1026 01:00:06.473572   27934 crio.go:469] duration metric: took 2.023171959s to extract the tarball
	I1026 01:00:06.473605   27934 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 01:00:06.509382   27934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:00:06.550351   27934 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 01:00:06.550371   27934 cache_images.go:84] Images are preloaded, skipping loading
	I1026 01:00:06.550379   27934 kubeadm.go:934] updating node { 192.168.39.183 8443 v1.31.2 crio true true} ...
	I1026 01:00:06.550479   27934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-300623 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 01:00:06.550540   27934 ssh_runner.go:195] Run: crio config
	I1026 01:00:06.601899   27934 cni.go:84] Creating CNI manager for ""
	I1026 01:00:06.601920   27934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1026 01:00:06.601928   27934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 01:00:06.601953   27934 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-300623 NodeName:ha-300623 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 01:00:06.602065   27934 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-300623"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.183"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 01:00:06.602090   27934 kube-vip.go:115] generating kube-vip config ...
	I1026 01:00:06.602134   27934 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1026 01:00:06.618905   27934 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1026 01:00:06.619004   27934 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1026 01:00:06.619054   27934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:00:06.628422   27934 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 01:00:06.628482   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1026 01:00:06.637507   27934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1026 01:00:06.653506   27934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:00:06.669385   27934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1026 01:00:06.685316   27934 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1026 01:00:06.701298   27934 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1026 01:00:06.704780   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:00:06.716358   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:00:06.835294   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:00:06.851617   27934 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623 for IP: 192.168.39.183
	I1026 01:00:06.851643   27934 certs.go:194] generating shared ca certs ...
	I1026 01:00:06.851663   27934 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:06.851825   27934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:00:06.851928   27934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:00:06.851951   27934 certs.go:256] generating profile certs ...
	I1026 01:00:06.852032   27934 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key
	I1026 01:00:06.852053   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt with IP's: []
	I1026 01:00:07.025844   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt ...
	I1026 01:00:07.025878   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt: {Name:mk0969781384c8eb24d904330417d9f7d1f6988a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.026073   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key ...
	I1026 01:00:07.026087   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key: {Name:mkbd66f66cfdc11b06ed7ee27efeab2c35691371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.026190   27934 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.30b82e6a
	I1026 01:00:07.026206   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.30b82e6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.254]
	I1026 01:00:07.091648   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.30b82e6a ...
	I1026 01:00:07.091676   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.30b82e6a: {Name:mk79ee9c8c68f427992ae46daac972e5a80d39e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.091862   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.30b82e6a ...
	I1026 01:00:07.091878   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.30b82e6a: {Name:mk0161ea9da0d9d1941870c52b97be187bff2c45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.091976   27934 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.30b82e6a -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt
	I1026 01:00:07.092075   27934 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.30b82e6a -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key
	I1026 01:00:07.092130   27934 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key
	I1026 01:00:07.092145   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt with IP's: []
	I1026 01:00:07.288723   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt ...
	I1026 01:00:07.288754   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt: {Name:mka585c80540dcf4447ce80873c4b4204a6ac833 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.288941   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key ...
	I1026 01:00:07.288955   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key: {Name:mk2a46d0d0037729eebdc4ee5998eb5ddbae3abb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
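The profile certs generated above are ordinary CA-signed certificates whose subjectAltName lists the service IP, loopback, node IP and the HA VIP (the IPs shown for apiserver.crt). A rough openssl equivalent, assuming RSA keys and illustrative file names rather than minikube's own crypto code:

    # key + CSR for the apiserver profile cert (file names are placeholders)
    openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
      -keyout apiserver.key -out apiserver.csr
    # sign with the cluster CA and attach the IP SANs from the log line above
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.183,IP:192.168.39.254")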
	I1026 01:00:07.289048   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:00:07.289071   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:00:07.289091   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:00:07.289110   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:00:07.289128   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:00:07.289145   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:00:07.289157   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:00:07.289174   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:00:07.289238   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:00:07.289301   27934 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:00:07.289321   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:00:07.289357   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:00:07.289389   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:00:07.289437   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:00:07.289497   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:00:07.289533   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /usr/share/ca-certificates/176152.pem
	I1026 01:00:07.289554   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:07.289572   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem -> /usr/share/ca-certificates/17615.pem
	I1026 01:00:07.290185   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:00:07.315249   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:00:07.338589   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:00:07.361991   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:00:07.385798   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 01:00:07.409069   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 01:00:07.431845   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:00:07.454880   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:00:07.477392   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:00:07.500857   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:00:07.523684   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:00:07.546154   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 01:00:07.562082   27934 ssh_runner.go:195] Run: openssl version
	I1026 01:00:07.567710   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:00:07.578511   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:00:07.582871   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:00:07.582924   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:00:07.588401   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:00:07.601567   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:00:07.628525   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:07.634748   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:07.634819   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:07.643756   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:00:07.657734   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:00:07.668305   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:00:07.672451   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:00:07.672508   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:00:07.677939   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
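The <hash>.0 names being linked above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash lookup convention: a consumer hashes a CA's subject and looks for a symlink with that hash in /etc/ssl/certs. A minimal sketch reproducing one of the links, with paths taken from the log:

    # -hash prints the subject hash, e.g. b5213941 for minikubeCA.pem
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"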
	I1026 01:00:07.688219   27934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:00:07.691924   27934 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 01:00:07.691988   27934 kubeadm.go:392] StartCluster: {Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:00:07.692059   27934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 01:00:07.692137   27934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 01:00:07.731345   27934 cri.go:89] found id: ""
	I1026 01:00:07.731417   27934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 01:00:07.741208   27934 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 01:00:07.750623   27934 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 01:00:07.760311   27934 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 01:00:07.760340   27934 kubeadm.go:157] found existing configuration files:
	
	I1026 01:00:07.760383   27934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 01:00:07.769207   27934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 01:00:07.769267   27934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 01:00:07.778578   27934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 01:00:07.787579   27934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 01:00:07.787661   27934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 01:00:07.797042   27934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 01:00:07.805955   27934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 01:00:07.806016   27934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 01:00:07.815274   27934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 01:00:07.824206   27934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 01:00:07.824269   27934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 01:00:07.833410   27934 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 01:00:07.938802   27934 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1026 01:00:07.938923   27934 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 01:00:08.028635   27934 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 01:00:08.028791   27934 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 01:00:08.028932   27934 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 01:00:08.038844   27934 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 01:00:08.041881   27934 out.go:235]   - Generating certificates and keys ...
	I1026 01:00:08.042903   27934 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 01:00:08.042973   27934 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 01:00:08.315204   27934 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 01:00:08.725495   27934 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1026 01:00:08.806960   27934 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1026 01:00:08.984098   27934 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1026 01:00:09.149484   27934 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1026 01:00:09.149653   27934 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-300623 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1026 01:00:09.309448   27934 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1026 01:00:09.309592   27934 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-300623 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1026 01:00:09.556294   27934 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 01:00:09.712766   27934 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 01:00:10.018193   27934 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1026 01:00:10.018258   27934 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 01:00:10.257230   27934 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 01:00:10.645833   27934 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 01:00:10.887377   27934 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 01:00:11.179208   27934 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 01:00:11.353056   27934 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 01:00:11.353655   27934 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 01:00:11.356992   27934 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 01:00:11.358796   27934 out.go:235]   - Booting up control plane ...
	I1026 01:00:11.358907   27934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 01:00:11.358983   27934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 01:00:11.359320   27934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 01:00:11.375691   27934 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 01:00:11.384224   27934 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 01:00:11.384282   27934 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 01:00:11.520735   27934 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 01:00:11.520904   27934 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 01:00:12.022375   27934 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.622573ms
	I1026 01:00:12.022456   27934 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1026 01:00:18.050317   27934 kubeadm.go:310] [api-check] The API server is healthy after 6.027294666s
	I1026 01:00:18.065132   27934 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 01:00:18.091049   27934 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 01:00:18.625277   27934 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 01:00:18.625502   27934 kubeadm.go:310] [mark-control-plane] Marking the node ha-300623 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 01:00:18.641286   27934 kubeadm.go:310] [bootstrap-token] Using token: 0x0agx.12z45ob3hq7so0d8
	I1026 01:00:18.642941   27934 out.go:235]   - Configuring RBAC rules ...
	I1026 01:00:18.643084   27934 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 01:00:18.651507   27934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 01:00:18.661575   27934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 01:00:18.665545   27934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 01:00:18.669512   27934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 01:00:18.677272   27934 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 01:00:18.691190   27934 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 01:00:18.958591   27934 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1026 01:00:19.464064   27934 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1026 01:00:19.464088   27934 kubeadm.go:310] 
	I1026 01:00:19.464204   27934 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1026 01:00:19.464225   27934 kubeadm.go:310] 
	I1026 01:00:19.464365   27934 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1026 01:00:19.464377   27934 kubeadm.go:310] 
	I1026 01:00:19.464406   27934 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1026 01:00:19.464485   27934 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 01:00:19.464567   27934 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 01:00:19.464579   27934 kubeadm.go:310] 
	I1026 01:00:19.464644   27934 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1026 01:00:19.464655   27934 kubeadm.go:310] 
	I1026 01:00:19.464719   27934 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 01:00:19.464726   27934 kubeadm.go:310] 
	I1026 01:00:19.464814   27934 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1026 01:00:19.464930   27934 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 01:00:19.465024   27934 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 01:00:19.465033   27934 kubeadm.go:310] 
	I1026 01:00:19.465247   27934 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 01:00:19.465347   27934 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1026 01:00:19.465355   27934 kubeadm.go:310] 
	I1026 01:00:19.465464   27934 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0x0agx.12z45ob3hq7so0d8 \
	I1026 01:00:19.465592   27934 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d \
	I1026 01:00:19.465626   27934 kubeadm.go:310] 	--control-plane 
	I1026 01:00:19.465634   27934 kubeadm.go:310] 
	I1026 01:00:19.465757   27934 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1026 01:00:19.465771   27934 kubeadm.go:310] 
	I1026 01:00:19.465887   27934 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0x0agx.12z45ob3hq7so0d8 \
	I1026 01:00:19.466042   27934 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d 
	I1026 01:00:19.466324   27934 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
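The two kubeadm join commands printed above embed a discovery hash of the cluster CA's public key. The standard recipe for recomputing that hash on a control-plane node, per kubeadm's documentation (this assumes an RSA CA key at kubeadm's default path):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'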
	I1026 01:00:19.466354   27934 cni.go:84] Creating CNI manager for ""
	I1026 01:00:19.466370   27934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1026 01:00:19.468090   27934 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1026 01:00:19.469492   27934 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 01:00:19.474603   27934 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1026 01:00:19.474628   27934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 01:00:19.493103   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 01:00:19.838794   27934 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 01:00:19.838909   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:19.838923   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-300623 minikube.k8s.io/updated_at=2024_10_26T01_00_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=ha-300623 minikube.k8s.io/primary=true
	I1026 01:00:19.860886   27934 ops.go:34] apiserver oom_adj: -16
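The label command a few lines up stamps the primary node with minikube's bookkeeping labels (version, commit, profile name, primary flag), and the oom_adj of -16 read back here shows the apiserver has been de-prioritized as an OOM-kill target. A quick check of the applied labels, as a usage sketch:

    kubectl get node ha-300623 --show-labels | tr ',' '\n' | grep minikube.k8s.io/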
	I1026 01:00:19.991866   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:20.492140   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:20.992964   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:21.492707   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:21.992237   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:22.491957   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:22.992426   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:23.492181   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:23.615897   27934 kubeadm.go:1113] duration metric: took 3.777077904s to wait for elevateKubeSystemPrivileges
	I1026 01:00:23.615938   27934 kubeadm.go:394] duration metric: took 15.923953549s to StartCluster
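The burst of "kubectl get sa default" calls above is a readiness poll: minikube keeps retrying until the controller manager has created the default ServiceAccount before it finishes the elevateKubeSystemPrivileges step. The same wait written out as a shell loop (kubectl and kubeconfig assumed to be set up):

    until kubectl get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5   # the default ServiceAccount appears once the SA controller is running
    done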
	I1026 01:00:23.615966   27934 settings.go:142] acquiring lock: {Name:mkb363a7a1b1532a7f832b54a0283d0a9e3d2b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:23.616076   27934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:00:23.616984   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:23.617268   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 01:00:23.617267   27934 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:00:23.617376   27934 start.go:241] waiting for startup goroutines ...
	I1026 01:00:23.617295   27934 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 01:00:23.617401   27934 addons.go:69] Setting storage-provisioner=true in profile "ha-300623"
	I1026 01:00:23.617447   27934 addons.go:234] Setting addon storage-provisioner=true in "ha-300623"
	I1026 01:00:23.617472   27934 addons.go:69] Setting default-storageclass=true in profile "ha-300623"
	I1026 01:00:23.617485   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:00:23.617498   27934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-300623"
	I1026 01:00:23.617505   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:23.617969   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.618010   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.618031   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.618073   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.633825   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35933
	I1026 01:00:23.633917   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38951
	I1026 01:00:23.634401   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.634418   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.634846   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.634864   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.634968   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.634988   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.635198   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.635332   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.635386   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:23.635834   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.635876   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.637603   27934 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:00:23.637812   27934 kapi.go:59] client config for ha-300623: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt", KeyFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key", CAFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 01:00:23.638218   27934 cert_rotation.go:140] Starting client certificate rotation controller
	I1026 01:00:23.638343   27934 addons.go:234] Setting addon default-storageclass=true in "ha-300623"
	I1026 01:00:23.638387   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:00:23.638626   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.638653   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.651480   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45267
	I1026 01:00:23.651965   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.652480   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.652510   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.652799   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.652991   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:23.653021   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42361
	I1026 01:00:23.654147   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.654693   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.654718   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.654832   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:23.655239   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.655791   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.655841   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.656920   27934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:00:23.658814   27934 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:00:23.658834   27934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 01:00:23.658853   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:23.662101   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:23.662598   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:23.662632   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:23.662848   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:23.663049   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:23.663200   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:23.663316   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:23.671976   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I1026 01:00:23.672433   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.672925   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.672950   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.673249   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.673483   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:23.675058   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:23.675265   27934 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 01:00:23.675282   27934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 01:00:23.675298   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:23.678185   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:23.678589   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:23.678611   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:23.678792   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:23.678957   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:23.679108   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:23.679249   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:23.762178   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 01:00:23.824448   27934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:00:23.874821   27934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 01:00:24.116804   27934 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
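The long sed pipeline a few lines above edits the coredns ConfigMap so pods in the guest can resolve host.minikube.internal to the host-side gateway (192.168.39.1). The net effect on the Corefile is roughly this fragment, with the untouched stock directives elided:

    .:53 {
        log
        errors
        # ... other stock directives unchanged ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        # ...
    }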
	I1026 01:00:24.301862   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.301884   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.301919   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.301937   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.302168   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.302185   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.302194   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.302193   27934 main.go:141] libmachine: (ha-300623) DBG | Closing plugin on server side
	I1026 01:00:24.302200   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.302168   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.302221   27934 main.go:141] libmachine: (ha-300623) DBG | Closing plugin on server side
	I1026 01:00:24.302229   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.302239   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.302246   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.302447   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.302464   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.302531   27934 main.go:141] libmachine: (ha-300623) DBG | Closing plugin on server side
	I1026 01:00:24.302526   27934 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1026 01:00:24.302571   27934 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1026 01:00:24.302606   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.302631   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.302680   27934 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1026 01:00:24.302699   27934 round_trippers.go:469] Request Headers:
	I1026 01:00:24.302706   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:00:24.302710   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:00:24.315108   27934 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1026 01:00:24.315658   27934 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1026 01:00:24.315672   27934 round_trippers.go:469] Request Headers:
	I1026 01:00:24.315679   27934 round_trippers.go:473]     Content-Type: application/json
	I1026 01:00:24.315683   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:00:24.315686   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:00:24.318571   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:00:24.318791   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.318805   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.319072   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.319089   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.319093   27934 main.go:141] libmachine: (ha-300623) DBG | Closing plugin on server side
	I1026 01:00:24.321441   27934 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1026 01:00:24.323036   27934 addons.go:510] duration metric: took 705.743688ms for enable addons: enabled=[storage-provisioner default-storageclass]
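For reference, the same two addons can be toggled and checked from the CLI against this profile; a usage sketch with the profile name taken from the log:

    minikube -p ha-300623 addons enable storage-provisioner
    minikube -p ha-300623 addons enable default-storageclass
    minikube -p ha-300623 addons list    # both should show as enabled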
	I1026 01:00:24.323074   27934 start.go:246] waiting for cluster config update ...
	I1026 01:00:24.323088   27934 start.go:255] writing updated cluster config ...
	I1026 01:00:24.324580   27934 out.go:201] 
	I1026 01:00:24.325800   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:24.325876   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:00:24.327345   27934 out.go:177] * Starting "ha-300623-m02" control-plane node in "ha-300623" cluster
	I1026 01:00:24.329009   27934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:00:24.329028   27934 cache.go:56] Caching tarball of preloaded images
	I1026 01:00:24.329124   27934 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:00:24.329138   27934 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 01:00:24.329209   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:00:24.329375   27934 start.go:360] acquireMachinesLock for ha-300623-m02: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 01:00:24.329429   27934 start.go:364] duration metric: took 35.088µs to acquireMachinesLock for "ha-300623-m02"
	I1026 01:00:24.329452   27934 start.go:93] Provisioning new machine with config: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:00:24.329544   27934 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1026 01:00:24.330943   27934 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1026 01:00:24.331025   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:24.331057   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:24.345495   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40299
	I1026 01:00:24.346002   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:24.346476   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:24.346491   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:24.346765   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:24.346970   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetMachineName
	I1026 01:00:24.347113   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:24.347293   27934 start.go:159] libmachine.API.Create for "ha-300623" (driver="kvm2")
	I1026 01:00:24.347323   27934 client.go:168] LocalClient.Create starting
	I1026 01:00:24.347359   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 01:00:24.347400   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 01:00:24.347421   27934 main.go:141] libmachine: Parsing certificate...
	I1026 01:00:24.347493   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 01:00:24.347519   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 01:00:24.347536   27934 main.go:141] libmachine: Parsing certificate...
	I1026 01:00:24.347559   27934 main.go:141] libmachine: Running pre-create checks...
	I1026 01:00:24.347568   27934 main.go:141] libmachine: (ha-300623-m02) Calling .PreCreateCheck
	I1026 01:00:24.347721   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetConfigRaw
	I1026 01:00:24.348120   27934 main.go:141] libmachine: Creating machine...
	I1026 01:00:24.348135   27934 main.go:141] libmachine: (ha-300623-m02) Calling .Create
	I1026 01:00:24.348260   27934 main.go:141] libmachine: (ha-300623-m02) Creating KVM machine...
	I1026 01:00:24.349505   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found existing default KVM network
	I1026 01:00:24.349630   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found existing private KVM network mk-ha-300623
	I1026 01:00:24.349770   27934 main.go:141] libmachine: (ha-300623-m02) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02 ...
	I1026 01:00:24.349806   27934 main.go:141] libmachine: (ha-300623-m02) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 01:00:24.349877   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:24.349757   28306 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:00:24.349949   27934 main.go:141] libmachine: (ha-300623-m02) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 01:00:24.581858   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:24.581729   28306 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa...
	I1026 01:00:24.824457   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:24.824338   28306 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/ha-300623-m02.rawdisk...
	I1026 01:00:24.824488   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Writing magic tar header
	I1026 01:00:24.824501   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Writing SSH key tar header
	I1026 01:00:24.824514   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:24.824445   28306 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02 ...
	I1026 01:00:24.824563   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02
	I1026 01:00:24.824601   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 01:00:24.824632   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:00:24.824643   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02 (perms=drwx------)
	I1026 01:00:24.824650   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 01:00:24.824656   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 01:00:24.824665   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 01:00:24.824671   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 01:00:24.824679   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 01:00:24.824685   27934 main.go:141] libmachine: (ha-300623-m02) Creating domain...
	I1026 01:00:24.824694   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 01:00:24.824702   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 01:00:24.824707   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins
	I1026 01:00:24.824717   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home
	I1026 01:00:24.824748   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Skipping /home - not owner
	I1026 01:00:24.825705   27934 main.go:141] libmachine: (ha-300623-m02) define libvirt domain using xml: 
	I1026 01:00:24.825725   27934 main.go:141] libmachine: (ha-300623-m02) <domain type='kvm'>
	I1026 01:00:24.825740   27934 main.go:141] libmachine: (ha-300623-m02)   <name>ha-300623-m02</name>
	I1026 01:00:24.825751   27934 main.go:141] libmachine: (ha-300623-m02)   <memory unit='MiB'>2200</memory>
	I1026 01:00:24.825760   27934 main.go:141] libmachine: (ha-300623-m02)   <vcpu>2</vcpu>
	I1026 01:00:24.825769   27934 main.go:141] libmachine: (ha-300623-m02)   <features>
	I1026 01:00:24.825777   27934 main.go:141] libmachine: (ha-300623-m02)     <acpi/>
	I1026 01:00:24.825786   27934 main.go:141] libmachine: (ha-300623-m02)     <apic/>
	I1026 01:00:24.825807   27934 main.go:141] libmachine: (ha-300623-m02)     <pae/>
	I1026 01:00:24.825825   27934 main.go:141] libmachine: (ha-300623-m02)     
	I1026 01:00:24.825837   27934 main.go:141] libmachine: (ha-300623-m02)   </features>
	I1026 01:00:24.825845   27934 main.go:141] libmachine: (ha-300623-m02)   <cpu mode='host-passthrough'>
	I1026 01:00:24.825850   27934 main.go:141] libmachine: (ha-300623-m02)   
	I1026 01:00:24.825856   27934 main.go:141] libmachine: (ha-300623-m02)   </cpu>
	I1026 01:00:24.825861   27934 main.go:141] libmachine: (ha-300623-m02)   <os>
	I1026 01:00:24.825868   27934 main.go:141] libmachine: (ha-300623-m02)     <type>hvm</type>
	I1026 01:00:24.825873   27934 main.go:141] libmachine: (ha-300623-m02)     <boot dev='cdrom'/>
	I1026 01:00:24.825880   27934 main.go:141] libmachine: (ha-300623-m02)     <boot dev='hd'/>
	I1026 01:00:24.825888   27934 main.go:141] libmachine: (ha-300623-m02)     <bootmenu enable='no'/>
	I1026 01:00:24.825901   27934 main.go:141] libmachine: (ha-300623-m02)   </os>
	I1026 01:00:24.825911   27934 main.go:141] libmachine: (ha-300623-m02)   <devices>
	I1026 01:00:24.825922   27934 main.go:141] libmachine: (ha-300623-m02)     <disk type='file' device='cdrom'>
	I1026 01:00:24.825934   27934 main.go:141] libmachine: (ha-300623-m02)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/boot2docker.iso'/>
	I1026 01:00:24.825942   27934 main.go:141] libmachine: (ha-300623-m02)       <target dev='hdc' bus='scsi'/>
	I1026 01:00:24.825947   27934 main.go:141] libmachine: (ha-300623-m02)       <readonly/>
	I1026 01:00:24.825955   27934 main.go:141] libmachine: (ha-300623-m02)     </disk>
	I1026 01:00:24.825960   27934 main.go:141] libmachine: (ha-300623-m02)     <disk type='file' device='disk'>
	I1026 01:00:24.825967   27934 main.go:141] libmachine: (ha-300623-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 01:00:24.825975   27934 main.go:141] libmachine: (ha-300623-m02)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/ha-300623-m02.rawdisk'/>
	I1026 01:00:24.825984   27934 main.go:141] libmachine: (ha-300623-m02)       <target dev='hda' bus='virtio'/>
	I1026 01:00:24.825991   27934 main.go:141] libmachine: (ha-300623-m02)     </disk>
	I1026 01:00:24.826012   27934 main.go:141] libmachine: (ha-300623-m02)     <interface type='network'>
	I1026 01:00:24.826033   27934 main.go:141] libmachine: (ha-300623-m02)       <source network='mk-ha-300623'/>
	I1026 01:00:24.826045   27934 main.go:141] libmachine: (ha-300623-m02)       <model type='virtio'/>
	I1026 01:00:24.826054   27934 main.go:141] libmachine: (ha-300623-m02)     </interface>
	I1026 01:00:24.826063   27934 main.go:141] libmachine: (ha-300623-m02)     <interface type='network'>
	I1026 01:00:24.826074   27934 main.go:141] libmachine: (ha-300623-m02)       <source network='default'/>
	I1026 01:00:24.826082   27934 main.go:141] libmachine: (ha-300623-m02)       <model type='virtio'/>
	I1026 01:00:24.826091   27934 main.go:141] libmachine: (ha-300623-m02)     </interface>
	I1026 01:00:24.826098   27934 main.go:141] libmachine: (ha-300623-m02)     <serial type='pty'>
	I1026 01:00:24.826107   27934 main.go:141] libmachine: (ha-300623-m02)       <target port='0'/>
	I1026 01:00:24.826112   27934 main.go:141] libmachine: (ha-300623-m02)     </serial>
	I1026 01:00:24.826119   27934 main.go:141] libmachine: (ha-300623-m02)     <console type='pty'>
	I1026 01:00:24.826136   27934 main.go:141] libmachine: (ha-300623-m02)       <target type='serial' port='0'/>
	I1026 01:00:24.826153   27934 main.go:141] libmachine: (ha-300623-m02)     </console>
	I1026 01:00:24.826166   27934 main.go:141] libmachine: (ha-300623-m02)     <rng model='virtio'>
	I1026 01:00:24.826178   27934 main.go:141] libmachine: (ha-300623-m02)       <backend model='random'>/dev/random</backend>
	I1026 01:00:24.826187   27934 main.go:141] libmachine: (ha-300623-m02)     </rng>
	I1026 01:00:24.826194   27934 main.go:141] libmachine: (ha-300623-m02)     
	I1026 01:00:24.826201   27934 main.go:141] libmachine: (ha-300623-m02)     
	I1026 01:00:24.826210   27934 main.go:141] libmachine: (ha-300623-m02)   </devices>
	I1026 01:00:24.826218   27934 main.go:141] libmachine: (ha-300623-m02) </domain>
	I1026 01:00:24.826230   27934 main.go:141] libmachine: (ha-300623-m02) 
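The XML dumped above is the libvirt domain definition the KVM driver generates for ha-300623-m02: 2200 MiB of RAM, 2 vCPUs, CD-ROM boot of boot2docker.iso, a virtio raw disk, and NICs on both the default and mk-ha-300623 networks. As a rough sketch only: the driver talks to libvirt through its API, but the same define-and-start step could be expressed by shelling out to virsh; the XML file name below is a placeholder, not something produced by this run.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// defineAndStart registers a persistent libvirt domain from an XML file and boots it.
func defineAndStart(xmlPath, domain string) error {
	// "virsh define" registers the domain described by the XML.
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	// "virsh start" boots it; the driver then waits for a DHCP lease.
	if out, err := exec.Command("virsh", "start", domain).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	// "ha-300623-m02.xml" stands in for the XML shown in the log above.
	if err := defineAndStart("ha-300623-m02.xml", "ha-300623-m02"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}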
	I1026 01:00:24.834328   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:19:9b:85 in network default
	I1026 01:00:24.834898   27934 main.go:141] libmachine: (ha-300623-m02) Ensuring networks are active...
	I1026 01:00:24.834921   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:24.835679   27934 main.go:141] libmachine: (ha-300623-m02) Ensuring network default is active
	I1026 01:00:24.836033   27934 main.go:141] libmachine: (ha-300623-m02) Ensuring network mk-ha-300623 is active
	I1026 01:00:24.836422   27934 main.go:141] libmachine: (ha-300623-m02) Getting domain xml...
	I1026 01:00:24.837184   27934 main.go:141] libmachine: (ha-300623-m02) Creating domain...
	I1026 01:00:26.123801   27934 main.go:141] libmachine: (ha-300623-m02) Waiting to get IP...
	I1026 01:00:26.124786   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:26.125171   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:26.125213   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:26.125161   28306 retry.go:31] will retry after 239.473798ms: waiting for machine to come up
	I1026 01:00:26.366497   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:26.367035   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:26.367063   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:26.366991   28306 retry.go:31] will retry after 247.775109ms: waiting for machine to come up
	I1026 01:00:26.616299   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:26.616749   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:26.616770   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:26.616730   28306 retry.go:31] will retry after 304.793231ms: waiting for machine to come up
	I1026 01:00:26.923149   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:26.923677   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:26.923696   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:26.923618   28306 retry.go:31] will retry after 501.966284ms: waiting for machine to come up
	I1026 01:00:27.427149   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:27.427595   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:27.427620   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:27.427557   28306 retry.go:31] will retry after 462.793286ms: waiting for machine to come up
	I1026 01:00:27.892113   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:27.892649   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:27.892674   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:27.892601   28306 retry.go:31] will retry after 627.280628ms: waiting for machine to come up
	I1026 01:00:28.521634   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:28.522118   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:28.522154   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:28.522059   28306 retry.go:31] will retry after 1.043043357s: waiting for machine to come up
	I1026 01:00:29.566267   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:29.566670   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:29.566697   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:29.566641   28306 retry.go:31] will retry after 925.497125ms: waiting for machine to come up
	I1026 01:00:30.493367   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:30.493801   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:30.493826   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:30.493760   28306 retry.go:31] will retry after 1.604522192s: waiting for machine to come up
	I1026 01:00:32.100432   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:32.100961   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:32.100982   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:32.100919   28306 retry.go:31] will retry after 2.197958234s: waiting for machine to come up
	I1026 01:00:34.301338   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:34.301864   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:34.301891   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:34.301813   28306 retry.go:31] will retry after 1.917554174s: waiting for machine to come up
	I1026 01:00:36.221440   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:36.221869   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:36.221888   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:36.221830   28306 retry.go:31] will retry after 3.272341592s: waiting for machine to come up
	I1026 01:00:39.496057   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:39.496525   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:39.496555   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:39.496473   28306 retry.go:31] will retry after 3.688097346s: waiting for machine to come up
	I1026 01:00:43.186914   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:43.187251   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:43.187284   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:43.187241   28306 retry.go:31] will retry after 5.370855346s: waiting for machine to come up
	I1026 01:00:48.563319   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.563799   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has current primary IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.563826   27934 main.go:141] libmachine: (ha-300623-m02) Found IP for machine: 192.168.39.62
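The retry lines above show the driver polling for the guest's DHCP lease with increasing delays until the IP (192.168.39.62) appears in network mk-ha-300623. A minimal sketch of that wait, assuming a shell-out to "virsh net-dhcp-leases" rather than the driver's real lease lookup; the backoff cap and column parsing are illustrative, not minikube's exact logic.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForIP polls `virsh net-dhcp-leases` for a lease matching the guest's MAC,
// backing off between attempts, until an IP appears or the timeout expires.
func waitForIP(network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if out, err := exec.Command("virsh", "net-dhcp-leases", network).Output(); err == nil {
			for _, line := range strings.Split(string(out), "\n") {
				if !strings.Contains(line, mac) {
					continue
				}
				// lease rows: expiry-date expiry-time MAC protocol IP/prefix hostname client-id
				if f := strings.Fields(line); len(f) >= 5 {
					return strings.SplitN(f[4], "/", 2)[0], nil
				}
			}
		}
		time.Sleep(delay)
		if delay < 5*time.Second { // cap the backoff, like the logged retries
			delay *= 2
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s in network %s within %s", mac, network, timeout)
}

func main() {
	ip, err := waitForIP("mk-ha-300623", "52:54:00:eb:f2:95", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("guest IP:", ip)
}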
	I1026 01:00:48.563869   27934 main.go:141] libmachine: (ha-300623-m02) Reserving static IP address...
	I1026 01:00:48.564263   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find host DHCP lease matching {name: "ha-300623-m02", mac: "52:54:00:eb:f2:95", ip: "192.168.39.62"} in network mk-ha-300623
	I1026 01:00:48.642625   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Getting to WaitForSSH function...
	I1026 01:00:48.642658   27934 main.go:141] libmachine: (ha-300623-m02) Reserved static IP address: 192.168.39.62
	I1026 01:00:48.642673   27934 main.go:141] libmachine: (ha-300623-m02) Waiting for SSH to be available...
	I1026 01:00:48.645214   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.645726   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:48.645751   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.645908   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Using SSH client type: external
	I1026 01:00:48.645957   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa (-rw-------)
	I1026 01:00:48.645990   27934 main.go:141] libmachine: (ha-300623-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 01:00:48.646004   27934 main.go:141] libmachine: (ha-300623-m02) DBG | About to run SSH command:
	I1026 01:00:48.646022   27934 main.go:141] libmachine: (ha-300623-m02) DBG | exit 0
	I1026 01:00:48.773437   27934 main.go:141] libmachine: (ha-300623-m02) DBG | SSH cmd err, output: <nil>: 
	I1026 01:00:48.773671   27934 main.go:141] libmachine: (ha-300623-m02) KVM machine creation complete!
	I1026 01:00:48.773985   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetConfigRaw
	I1026 01:00:48.774531   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:48.774718   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:48.774839   27934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 01:00:48.774863   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetState
	I1026 01:00:48.776153   27934 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 01:00:48.776168   27934 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 01:00:48.776176   27934 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 01:00:48.776184   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:48.778481   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.778857   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:48.778884   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.778991   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:48.779164   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:48.779300   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:48.779402   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:48.779538   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:48.779788   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:48.779807   27934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 01:00:48.896727   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:00:48.896751   27934 main.go:141] libmachine: Detecting the provisioner...
	I1026 01:00:48.896762   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:48.899398   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.899741   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:48.899779   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.899885   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:48.900047   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:48.900184   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:48.900289   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:48.900414   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:48.900617   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:48.900631   27934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 01:00:49.017846   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 01:00:49.017965   27934 main.go:141] libmachine: found compatible host: buildroot
	I1026 01:00:49.017981   27934 main.go:141] libmachine: Provisioning with buildroot...
	I1026 01:00:49.017993   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetMachineName
	I1026 01:00:49.018219   27934 buildroot.go:166] provisioning hostname "ha-300623-m02"
	I1026 01:00:49.018266   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetMachineName
	I1026 01:00:49.018441   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.021311   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.022133   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.022168   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.022362   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.022542   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.022691   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.022833   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.022971   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:49.023157   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:49.023181   27934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-300623-m02 && echo "ha-300623-m02" | sudo tee /etc/hostname
	I1026 01:00:49.154863   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-300623-m02
	
	I1026 01:00:49.154891   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.157409   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.157924   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.157965   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.158127   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.158313   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.158463   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.158583   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.158721   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:49.158874   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:49.158890   27934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-300623-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-300623-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-300623-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:00:49.281279   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:00:49.281312   27934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:00:49.281349   27934 buildroot.go:174] setting up certificates
	I1026 01:00:49.281361   27934 provision.go:84] configureAuth start
	I1026 01:00:49.281370   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetMachineName
	I1026 01:00:49.281641   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetIP
	I1026 01:00:49.284261   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.284619   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.284660   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.284785   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.286954   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.287298   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.287326   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.287470   27934 provision.go:143] copyHostCerts
	I1026 01:00:49.287501   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:00:49.287544   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:00:49.287555   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:00:49.287640   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:00:49.287745   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:00:49.287775   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:00:49.287788   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:00:49.287835   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:00:49.287908   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:00:49.287934   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:00:49.287941   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:00:49.287990   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:00:49.288059   27934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.ha-300623-m02 san=[127.0.0.1 192.168.39.62 ha-300623-m02 localhost minikube]
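The copyHostCerts entries and the "generating server cert" line above show the machine certificate being issued from the shared CA with the SANs 127.0.0.1, 192.168.39.62, ha-300623-m02, localhost and minikube. A hedged sketch of the same idea using Go's crypto/x509; the file paths are placeholders, a PKCS#1 RSA CA key is assumed, and minikube's own cert helpers do more than this.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, _ := os.ReadFile("ca.pem")        // placeholder path for the shared CA cert
	caKeyPEM, _ := os.ReadFile("ca-key.pem") // placeholder path for the CA private key
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		panic("could not decode CA PEM material")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes a PKCS#1 RSA CA key
	if err != nil {
		panic(err)
	}

	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-300623-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision step logged above:
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.62")},
		DNSNames:    []string{"ha-300623-m02", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	// Emit the signed server certificate; the private key would be written out alongside it.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}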
	I1026 01:00:49.407467   27934 provision.go:177] copyRemoteCerts
	I1026 01:00:49.407520   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:00:49.407552   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.410082   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.410436   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.410457   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.410696   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.410880   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.411041   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.411166   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
	I1026 01:00:49.495389   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:00:49.495471   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:00:49.520501   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:00:49.520571   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 01:00:49.544170   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:00:49.544266   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 01:00:49.567939   27934 provision.go:87] duration metric: took 286.565797ms to configureAuth
	I1026 01:00:49.567967   27934 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:00:49.568139   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:49.568207   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.570619   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.570975   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.571000   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.571206   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.571396   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.571565   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.571706   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.571875   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:49.572093   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:49.572115   27934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:00:49.802107   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:00:49.802136   27934 main.go:141] libmachine: Checking connection to Docker...
	I1026 01:00:49.802143   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetURL
	I1026 01:00:49.803331   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Using libvirt version 6000000
	I1026 01:00:49.805234   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.805565   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.805594   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.805716   27934 main.go:141] libmachine: Docker is up and running!
	I1026 01:00:49.805729   27934 main.go:141] libmachine: Reticulating splines...
	I1026 01:00:49.805746   27934 client.go:171] duration metric: took 25.458413075s to LocalClient.Create
	I1026 01:00:49.805769   27934 start.go:167] duration metric: took 25.45847781s to libmachine.API.Create "ha-300623"
	I1026 01:00:49.805779   27934 start.go:293] postStartSetup for "ha-300623-m02" (driver="kvm2")
	I1026 01:00:49.805791   27934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:00:49.805808   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:49.806042   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:00:49.806065   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.808068   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.808407   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.808434   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.808582   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.808773   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.808963   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.809100   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
	I1026 01:00:49.895521   27934 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:00:49.899409   27934 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:00:49.899435   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:00:49.899514   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:00:49.899627   27934 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:00:49.899639   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /etc/ssl/certs/176152.pem
	I1026 01:00:49.899762   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:00:49.908849   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:00:49.931119   27934 start.go:296] duration metric: took 125.326962ms for postStartSetup
	I1026 01:00:49.931168   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetConfigRaw
	I1026 01:00:49.931760   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetIP
	I1026 01:00:49.934318   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.934656   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.934677   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.934971   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:00:49.935199   27934 start.go:128] duration metric: took 25.605643958s to createHost
	I1026 01:00:49.935242   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.937348   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.937642   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.937668   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.937766   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.937916   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.938069   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.938232   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.938387   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:49.938577   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:49.938589   27934 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:00:50.054126   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729904450.033939767
	
	I1026 01:00:50.054149   27934 fix.go:216] guest clock: 1729904450.033939767
	I1026 01:00:50.054158   27934 fix.go:229] Guest: 2024-10-26 01:00:50.033939767 +0000 UTC Remote: 2024-10-26 01:00:49.935212743 +0000 UTC m=+68.870574304 (delta=98.727024ms)
	I1026 01:00:50.054179   27934 fix.go:200] guest clock delta is within tolerance: 98.727024ms
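The fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) against the local clock and accept the 98.727024ms delta as within tolerance. A tiny sketch of that comparison; the 2-second tolerance below is an assumption for illustration, not the value minikube uses.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK returns the absolute guest/local clock difference and whether it
// falls within the given tolerance.
func clockDeltaOK(guest, local time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(local)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	local := time.Now()
	guest := local.Add(98727024 * time.Nanosecond)     // the 98.727024ms delta from the log
	d, ok := clockDeltaOK(guest, local, 2*time.Second) // tolerance value is an assumption
	fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
}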
	I1026 01:00:50.054185   27934 start.go:83] releasing machines lock for "ha-300623-m02", held for 25.72474455s
	I1026 01:00:50.054206   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:50.054478   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetIP
	I1026 01:00:50.057251   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.057634   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:50.057666   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.060016   27934 out.go:177] * Found network options:
	I1026 01:00:50.061125   27934 out.go:177]   - NO_PROXY=192.168.39.183
	W1026 01:00:50.062183   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:00:50.062255   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:50.062824   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:50.062979   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:50.063068   27934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:00:50.063107   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	W1026 01:00:50.063196   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:00:50.063287   27934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:00:50.063313   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:50.065732   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.065764   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.066105   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:50.066132   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.066157   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:50.066172   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.066255   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:50.066343   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:50.066466   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:50.066529   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:50.066613   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:50.066757   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
	I1026 01:00:50.066776   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:50.066891   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
	I1026 01:00:50.300821   27934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:00:50.306327   27934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:00:50.306383   27934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:00:50.322223   27934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 01:00:50.322250   27934 start.go:495] detecting cgroup driver to use...
	I1026 01:00:50.322315   27934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:00:50.338468   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:00:50.351846   27934 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:00:50.351912   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:00:50.366331   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:00:50.380253   27934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:00:50.506965   27934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:00:50.668001   27934 docker.go:233] disabling docker service ...
	I1026 01:00:50.668069   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:00:50.682592   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:00:50.695962   27934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:00:50.824939   27934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:00:50.938022   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:00:50.952273   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:00:50.970167   27934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:00:50.970223   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:50.980486   27934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:00:50.980547   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:50.991006   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.001215   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.011378   27934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:00:51.021477   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.031248   27934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.047066   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.056669   27934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:00:51.065644   27934 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 01:00:51.065713   27934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 01:00:51.077591   27934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:00:51.086612   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:00:51.190831   27934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 01:00:51.272466   27934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:00:51.272541   27934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:00:51.277536   27934 start.go:563] Will wait 60s for crictl version
	I1026 01:00:51.277595   27934 ssh_runner.go:195] Run: which crictl
	I1026 01:00:51.281084   27934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:00:51.316243   27934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:00:51.316339   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:00:51.344007   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:00:51.373231   27934 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:00:51.374904   27934 out.go:177]   - env NO_PROXY=192.168.39.183
	I1026 01:00:51.375971   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetIP
	I1026 01:00:51.378647   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:51.378955   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:51.378984   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:51.379181   27934 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:00:51.383229   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:00:51.395396   27934 mustload.go:65] Loading cluster: ha-300623
	I1026 01:00:51.395665   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:51.395979   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:51.396021   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:51.411495   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I1026 01:00:51.412012   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:51.412465   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:51.412492   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:51.412809   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:51.413020   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:51.414616   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:00:51.414900   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:51.414943   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:51.429345   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I1026 01:00:51.429857   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:51.430394   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:51.430414   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:51.430718   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:51.430932   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:51.431063   27934 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623 for IP: 192.168.39.62
	I1026 01:00:51.431072   27934 certs.go:194] generating shared ca certs ...
	I1026 01:00:51.431085   27934 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:51.431231   27934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:00:51.431297   27934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:00:51.431310   27934 certs.go:256] generating profile certs ...
	I1026 01:00:51.431379   27934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key
	I1026 01:00:51.431404   27934 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.7eff9eab
	I1026 01:00:51.431417   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.7eff9eab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.62 192.168.39.254]
	I1026 01:00:51.551653   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.7eff9eab ...
	I1026 01:00:51.551682   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.7eff9eab: {Name:mk7f84df361678f6c264c35c7a54837d967e14ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:51.551843   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.7eff9eab ...
	I1026 01:00:51.551855   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.7eff9eab: {Name:mkd389918e7eb8b1c88d8cee260e577971075312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:51.551931   27934 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.7eff9eab -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt
	I1026 01:00:51.552066   27934 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.7eff9eab -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key
	I1026 01:00:51.552188   27934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key
	I1026 01:00:51.552202   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:00:51.552214   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:00:51.552227   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:00:51.552240   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:00:51.552251   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:00:51.552262   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:00:51.552275   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:00:51.552287   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:00:51.552335   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:00:51.552366   27934 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:00:51.552375   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:00:51.552397   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:00:51.552420   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:00:51.552441   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:00:51.552479   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:00:51.552504   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:51.552517   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem -> /usr/share/ca-certificates/17615.pem
	I1026 01:00:51.552529   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /usr/share/ca-certificates/176152.pem
	I1026 01:00:51.552559   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:51.555385   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:51.555741   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:51.555776   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:51.555946   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:51.556121   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:51.556266   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:51.556384   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
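
The sshutil line above opens the SSH session used for all of the stat/scp steps that follow, authenticating as user docker with the machine's id_rsa key. A rough sketch of that connection with golang.org/x/crypto/ssh, assuming the key path and address from the log; host-key verification is skipped only to keep the sketch short, and this is not minikube's actual sshutil implementation.

// Illustrative sketch: dial the node the way the sshutil log line implies,
// with key-based auth, then run one of the stat probes shown below.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}
	client, err := ssh.Dial("tcp", "192.168.39.183:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("stat -c %s /var/lib/minikube/certs/sa.pub")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
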
	I1026 01:00:51.633868   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1026 01:00:51.638556   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1026 01:00:51.651311   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1026 01:00:51.655533   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1026 01:00:51.667970   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1026 01:00:51.671912   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1026 01:00:51.681736   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1026 01:00:51.685589   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1026 01:00:51.695314   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1026 01:00:51.699011   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1026 01:00:51.709409   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1026 01:00:51.713200   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1026 01:00:51.722473   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:00:51.745687   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:00:51.767846   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:00:51.789516   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:00:51.811259   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1026 01:00:51.833028   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 01:00:51.856110   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:00:51.879410   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:00:51.905258   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:00:51.929159   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:00:51.951850   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:00:51.976197   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1026 01:00:51.991793   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1026 01:00:52.007237   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1026 01:00:52.023097   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1026 01:00:52.038541   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1026 01:00:52.053670   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1026 01:00:52.068858   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1026 01:00:52.084534   27934 ssh_runner.go:195] Run: openssl version
	I1026 01:00:52.089743   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:00:52.099587   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:52.103529   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:52.103574   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:52.108773   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:00:52.118562   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:00:52.128439   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:00:52.132388   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:00:52.132437   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:00:52.137609   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 01:00:52.147519   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:00:52.157786   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:00:52.162186   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:00:52.162230   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:00:52.167650   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
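
Each CA PEM installed above is also linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients on the node discover trusted roots. A small sketch of those two steps, shelling out to openssl the same way the remote commands do; the paths are placeholders.

// Illustrative sketch: compute the OpenSSL subject hash of a CA PEM and
// create the /etc/ssl/certs/<hash>.0 symlink, mirroring the commands above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // hypothetical local path

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic `ln -fs`: replace an existing link if present
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}
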
	I1026 01:00:52.179201   27934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:00:52.183712   27934 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 01:00:52.183765   27934 kubeadm.go:934] updating node {m02 192.168.39.62 8443 v1.31.2 crio true true} ...
	I1026 01:00:52.183873   27934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-300623-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 01:00:52.183908   27934 kube-vip.go:115] generating kube-vip config ...
	I1026 01:00:52.183953   27934 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1026 01:00:52.201496   27934 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1026 01:00:52.201565   27934 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
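
The kube-vip static-pod manifest above is rendered by minikube (kube-vip.go:137) with the VIP, port and load-balancing toggles filled in before being copied to /etc/kubernetes/manifests. A trimmed-down text/template sketch of that kind of generation, using the values from the log; the template here is deliberately shorter than the real one and is not the manifest minikube ships.

// Illustrative sketch: render a cut-down kube-vip static-pod manifest
// from a template, in the spirit of kube-vip.go:137 above.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: address
      value: "{{.VIP}}"
    - name: port
      value: "{{.Port}}"
    - name: cp_enable
      value: "true"
    - name: lb_enable
      value: "{{.LoadBalancer}}"
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	// Values taken from the generated config shown above.
	err := tmpl.Execute(os.Stdout, map[string]any{
		"Image":        "ghcr.io/kube-vip/kube-vip:v0.8.4",
		"VIP":          "192.168.39.254",
		"Port":         8443,
		"LoadBalancer": true,
	})
	if err != nil {
		panic(err)
	}
}
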
	I1026 01:00:52.201625   27934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:00:52.212390   27934 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1026 01:00:52.212439   27934 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1026 01:00:52.223416   27934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1026 01:00:52.223436   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1026 01:00:52.223483   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1026 01:00:52.223536   27934 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1026 01:00:52.223555   27934 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1026 01:00:52.227638   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1026 01:00:52.227662   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1026 01:00:53.105621   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1026 01:00:53.105715   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1026 01:00:53.110408   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1026 01:00:53.110445   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1026 01:00:53.233007   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:00:53.274448   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1026 01:00:53.274566   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1026 01:00:53.294441   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1026 01:00:53.294487   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
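
binary.go:74 above fetches kubectl, kubeadm and kubelet from dl.k8s.io and validates each download against its published .sha256 file before copying it into /var/lib/minikube/binaries on the node. A sketch of that download-and-verify step for one binary; writing into the working directory rather than the cache path is just for illustration.

// Illustrative sketch: download a release binary and check it against the
// published .sha256 file, as the checksum=file:... URLs above describe.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	want := strings.Fields(strings.TrimSpace(string(sumFile)))[0]
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl verified:", want)
}
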
	I1026 01:00:53.654866   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1026 01:00:53.664222   27934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1026 01:00:53.679840   27934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:00:53.695653   27934 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1026 01:00:53.711652   27934 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1026 01:00:53.715553   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
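
The bash one-liner above pins control-plane.minikube.internal to the HA VIP by filtering any existing entry out of /etc/hosts and appending the new mapping. The same rewrite expressed in Go, against a hypothetical copy of the file so the sketch can be run harmlessly.

// Illustrative sketch: drop any existing control-plane.minikube.internal
// entry from a hosts file and append the VIP, like the bash one-liner above.
package main

import (
	"os"
	"strings"
)

func main() {
	const path = "/tmp/hosts" // hypothetical copy; the real target is /etc/hosts
	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // same filter as `grep -v $'\tcontrol-plane.minikube.internal$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
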
	I1026 01:00:53.727360   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:00:53.853122   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:00:53.869765   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:00:53.870266   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:53.870326   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:53.886042   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40443
	I1026 01:00:53.886641   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:53.887219   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:53.887243   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:53.887613   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:53.887814   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:53.887974   27934 start.go:317] joinCluster: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:00:53.888094   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1026 01:00:53.888116   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:53.891569   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:53.892007   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:53.892034   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:53.892213   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:53.892359   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:53.892504   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:53.892700   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:54.059992   27934 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:00:54.060032   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l7xlpj.5mal73j6josvpzmx --discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m02 --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443"
	I1026 01:01:15.752497   27934 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l7xlpj.5mal73j6josvpzmx --discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m02 --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443": (21.692442996s)
	I1026 01:01:15.752534   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1026 01:01:16.303360   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-300623-m02 minikube.k8s.io/updated_at=2024_10_26T01_01_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=ha-300623 minikube.k8s.io/primary=false
	I1026 01:01:16.453258   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-300623-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1026 01:01:16.592863   27934 start.go:319] duration metric: took 22.704885851s to joinCluster
	I1026 01:01:16.592954   27934 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:01:16.593288   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:01:16.594650   27934 out.go:177] * Verifying Kubernetes components...
	I1026 01:01:16.596091   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:01:16.850259   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:01:16.885786   27934 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:01:16.886030   27934 kapi.go:59] client config for ha-300623: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt", KeyFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key", CAFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 01:01:16.886096   27934 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.183:8443
	I1026 01:01:16.886309   27934 node_ready.go:35] waiting up to 6m0s for node "ha-300623-m02" to be "Ready" ...
	I1026 01:01:16.886394   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:16.886406   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:16.886416   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:16.886421   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:16.901951   27934 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1026 01:01:17.386830   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:17.386852   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:17.386859   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:17.386867   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:17.391117   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:17.886726   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:17.886752   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:17.886769   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:17.886774   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:17.891812   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:01:18.386816   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:18.386836   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:18.386844   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:18.386849   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:18.389277   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:18.887322   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:18.887345   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:18.887354   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:18.887359   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:18.890950   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:18.891497   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:19.386717   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:19.386741   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:19.386752   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:19.386757   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:19.389841   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:19.886538   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:19.886562   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:19.886569   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:19.886573   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:19.889883   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:20.386728   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:20.386753   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:20.386764   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:20.386770   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:20.392483   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:01:20.887438   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:20.887464   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:20.887474   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:20.887480   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:20.891169   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:20.891590   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:21.386734   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:21.386758   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:21.386770   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:21.386778   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:21.389970   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:21.886824   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:21.886849   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:21.886859   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:21.886865   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:21.891560   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:22.386652   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:22.386674   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:22.386682   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:22.386686   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:22.391520   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:22.887482   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:22.887508   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:22.887524   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:22.887529   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:22.891155   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:22.891643   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:23.387538   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:23.387567   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:23.387578   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:23.387585   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:23.390499   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:23.886601   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:23.886627   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:23.886637   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:23.886647   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:23.890054   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:24.387524   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:24.387553   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:24.387564   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:24.387570   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:24.390618   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:24.886521   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:24.886550   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:24.886561   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:24.886567   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:24.889985   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:25.386794   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:25.386822   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:25.386831   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:25.386838   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:25.390108   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:25.390691   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:25.887094   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:25.887116   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:25.887124   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:25.887128   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:25.890067   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:26.387517   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:26.387537   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:26.387545   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:26.387550   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:26.391065   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:26.886664   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:26.886688   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:26.886698   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:26.886703   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:26.889958   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:27.386821   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:27.386850   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:27.386860   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:27.386865   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:27.389901   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:27.886863   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:27.886892   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:27.886901   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:27.886904   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:27.890223   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:27.890712   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:28.387256   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:28.387286   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:28.387297   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:28.387304   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:28.391313   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:28.887398   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:28.887423   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:28.887431   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:28.887435   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:28.891415   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:29.387299   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:29.387320   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:29.387328   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:29.387333   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:29.394125   27934 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1026 01:01:29.886896   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:29.886918   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:29.886926   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:29.886928   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:29.890460   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:29.891101   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:30.386473   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:30.386494   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:30.386505   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:30.386512   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:30.389574   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:30.886604   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:30.886631   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:30.886640   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:30.886644   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:30.890190   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:31.386924   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:31.386949   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:31.386959   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:31.386966   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:31.399297   27934 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1026 01:01:31.887213   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:31.887236   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:31.887243   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:31.887250   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:31.890605   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:31.891200   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:32.386487   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:32.386513   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:32.386523   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:32.386530   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:32.389962   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:32.886975   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:32.887003   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:32.887016   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:32.887021   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:32.890088   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.386916   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:33.386938   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.386946   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.386950   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.390776   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.886708   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:33.886731   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.886742   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.886747   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.890420   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.890962   27934 node_ready.go:49] node "ha-300623-m02" has status "Ready":"True"
	I1026 01:01:33.890985   27934 node_ready.go:38] duration metric: took 17.004659759s for node "ha-300623-m02" to be "Ready" ...
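
The node_ready loop above issues a GET for the node roughly every 500ms until its Ready condition reports True, which took about 17s here. A hedged client-go sketch of the same wait, assuming a kubeconfig in the default location; this is not minikube's own node_ready implementation.

// Illustrative sketch (client-go): wait for a node's Ready condition, the way
// the node_ready loop above polls the API server every ~500ms.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-300623-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet" and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node ha-300623-m02 is Ready")
}
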
	I1026 01:01:33.890996   27934 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:01:33.891090   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:33.891103   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.891113   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.891118   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.895593   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:33.901510   27934 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.901584   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ntmgc
	I1026 01:01:33.901593   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.901599   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.901603   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.904838   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.905632   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:33.905646   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.905653   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.905662   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.908670   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.909108   27934 pod_ready.go:93] pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:33.909125   27934 pod_ready.go:82] duration metric: took 7.593244ms for pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.909134   27934 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.909228   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qx24f
	I1026 01:01:33.909236   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.909243   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.909246   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.911623   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.912324   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:33.912342   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.912351   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.912356   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.914836   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.915526   27934 pod_ready.go:93] pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:33.915582   27934 pod_ready.go:82] duration metric: took 6.44095ms for pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.915619   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.915708   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623
	I1026 01:01:33.915720   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.915730   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.915737   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.918774   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.919308   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:33.919323   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.919332   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.919337   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.921541   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.921916   27934 pod_ready.go:93] pod "etcd-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:33.921932   27934 pod_ready.go:82] duration metric: took 6.293574ms for pod "etcd-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.921944   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.921993   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623-m02
	I1026 01:01:33.922003   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.922013   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.922020   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.924042   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.924574   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:33.924592   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.924620   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.924630   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.926627   27934 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:01:33.927009   27934 pod_ready.go:93] pod "etcd-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:33.927026   27934 pod_ready.go:82] duration metric: took 5.07473ms for pod "etcd-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.927043   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.087429   27934 request.go:632] Waited for 160.309698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623
	I1026 01:01:34.087488   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623
	I1026 01:01:34.087496   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.087507   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.087517   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.093218   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
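
The "Waited for ... due to client-side throttling" messages here come from client-go's default token-bucket rate limiter (roughly 5 QPS with a burst of 10 when QPS and Burst are left at zero, as the rest.Config dump earlier shows), so a burst of per-pod readiness GETs queues briefly on the client side. Raising QPS/Burst on the rest.Config is the usual way to avoid those waits; a short sketch, not a suggestion that the test harness does this.

// Illustrative sketch: bump client-go's client-side rate limits so bursts of
// readiness GETs are not delayed by the default token-bucket limiter.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 requests/second when left at 0
	cfg.Burst = 100 // default burst is 10 when left at 0
	if _, err := kubernetes.NewForConfig(cfg); err != nil { // clientset built from the tuned config
		panic(err)
	}
}
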
	I1026 01:01:34.287260   27934 request.go:632] Waited for 193.380175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:34.287335   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:34.287346   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.287356   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.287367   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.290680   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:34.291257   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:34.291280   27934 pod_ready.go:82] duration metric: took 364.229033ms for pod "kube-apiserver-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.291293   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.487643   27934 request.go:632] Waited for 196.274187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m02
	I1026 01:01:34.487743   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m02
	I1026 01:01:34.487757   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.487769   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.487776   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.490314   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:34.687266   27934 request.go:632] Waited for 196.34951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:34.687319   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:34.687325   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.687332   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.687336   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.690681   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:34.691098   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:34.691116   27934 pod_ready.go:82] duration metric: took 399.816191ms for pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.691125   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.887235   27934 request.go:632] Waited for 196.048043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623
	I1026 01:01:34.887286   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623
	I1026 01:01:34.887292   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.887299   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.887304   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.890298   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:35.087251   27934 request.go:632] Waited for 196.393455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:35.087304   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:35.087311   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.087320   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.087327   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.096042   27934 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1026 01:01:35.096481   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:35.096497   27934 pod_ready.go:82] duration metric: took 405.365113ms for pod "kube-controller-manager-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.096507   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.287575   27934 request.go:632] Waited for 190.95439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m02
	I1026 01:01:35.287635   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m02
	I1026 01:01:35.287641   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.287656   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.287664   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.290956   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:35.486850   27934 request.go:632] Waited for 195.295178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:35.486901   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:35.486907   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.486914   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.486918   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.489791   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:35.490490   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:35.490509   27934 pod_ready.go:82] duration metric: took 393.992807ms for pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.490519   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-65rns" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.687677   27934 request.go:632] Waited for 197.085878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65rns
	I1026 01:01:35.687734   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65rns
	I1026 01:01:35.687739   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.687747   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.687751   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.690861   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:35.886824   27934 request.go:632] Waited for 195.303807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:35.886902   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:35.886908   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.886915   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.886919   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.890003   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:35.890588   27934 pod_ready.go:93] pod "kube-proxy-65rns" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:35.890610   27934 pod_ready.go:82] duration metric: took 400.083533ms for pod "kube-proxy-65rns" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.890620   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7hn2d" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.087724   27934 request.go:632] Waited for 197.035019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hn2d
	I1026 01:01:36.087799   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hn2d
	I1026 01:01:36.087807   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.087817   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.087823   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.090987   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:36.287060   27934 request.go:632] Waited for 195.34906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:36.287112   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:36.287118   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.287126   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.287130   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.290355   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:36.290978   27934 pod_ready.go:93] pod "kube-proxy-7hn2d" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:36.291000   27934 pod_ready.go:82] duration metric: took 400.372479ms for pod "kube-proxy-7hn2d" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.291014   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.486971   27934 request.go:632] Waited for 195.883358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623
	I1026 01:01:36.487050   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623
	I1026 01:01:36.487059   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.487068   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.487073   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.491124   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:36.686937   27934 request.go:632] Waited for 195.292838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:36.686992   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:36.686998   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.687005   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.687009   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.689912   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:36.690462   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:36.690479   27934 pod_ready.go:82] duration metric: took 399.458178ms for pod "kube-scheduler-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.690490   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.887645   27934 request.go:632] Waited for 197.093805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m02
	I1026 01:01:36.887721   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m02
	I1026 01:01:36.887731   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.887742   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.887752   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.892972   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:01:37.086834   27934 request.go:632] Waited for 193.310036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:37.086917   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:37.086924   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.086935   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.086940   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.091462   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:37.091914   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:37.091933   27934 pod_ready.go:82] duration metric: took 401.437262ms for pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:37.091944   27934 pod_ready.go:39] duration metric: took 3.20092896s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
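The waits logged above poll each control-plane pod until its Ready condition reports True. A minimal client-go sketch of that check (an illustrative helper, not minikube's pod_ready.go; the kubeconfig path and pod name are placeholders):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the named kube-system pod has Ready=True,
	// the same condition the log entries above are waiting on.
	func podReady(client kubernetes.Interface, name string) (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // default ~/.kube/config
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)
		fmt.Println(podReady(client, "kube-apiserver-ha-300623"))
	}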
	I1026 01:01:37.091963   27934 api_server.go:52] waiting for apiserver process to appear ...
	I1026 01:01:37.092013   27934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:01:37.107184   27934 api_server.go:72] duration metric: took 20.514182215s to wait for apiserver process to appear ...
	I1026 01:01:37.107232   27934 api_server.go:88] waiting for apiserver healthz status ...
	I1026 01:01:37.107251   27934 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1026 01:01:37.112416   27934 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1026 01:01:37.112504   27934 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I1026 01:01:37.112517   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.112528   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.112539   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.113540   27934 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1026 01:01:37.113668   27934 api_server.go:141] control plane version: v1.31.2
	I1026 01:01:37.113698   27934 api_server.go:131] duration metric: took 6.458284ms to wait for apiserver health ...
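The healthz probe above is a plain HTTPS GET that succeeds once the endpoint returns 200 with body "ok". A self-contained sketch of that polling loop (assumption: certificate verification is skipped here because no cluster CA is loaded; a real client would trust the cluster's CA bundle):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it returns 200 "ok" or the timeout expires.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
	}

	func main() {
		fmt.Println(waitHealthz("https://192.168.39.183:8443/healthz", time.Minute))
	}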
	I1026 01:01:37.113710   27934 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 01:01:37.287117   27934 request.go:632] Waited for 173.325695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:37.287206   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:37.287218   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.287229   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.287237   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.291660   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:37.296191   27934 system_pods.go:59] 17 kube-system pods found
	I1026 01:01:37.296219   27934 system_pods.go:61] "coredns-7c65d6cfc9-ntmgc" [b2e07a8a-ed53-4151-9cdd-6345d84fea7d] Running
	I1026 01:01:37.296224   27934 system_pods.go:61] "coredns-7c65d6cfc9-qx24f" [d7fc0eb5-4828-436f-a5c8-8de607f590cf] Running
	I1026 01:01:37.296228   27934 system_pods.go:61] "etcd-ha-300623" [7af25c40-90db-43fb-9d1c-02d3b6092d30] Running
	I1026 01:01:37.296232   27934 system_pods.go:61] "etcd-ha-300623-m02" [5e6978a1-41aa-46dd-a1cd-e02042d4ce04] Running
	I1026 01:01:37.296235   27934 system_pods.go:61] "kindnet-4cqmf" [c887471a-629c-4bf1-9296-8ccb5ba56cd6] Running
	I1026 01:01:37.296238   27934 system_pods.go:61] "kindnet-g5bkb" [0ad4551d-8c28-45b3-9563-03d427208f4f] Running
	I1026 01:01:37.296241   27934 system_pods.go:61] "kube-apiserver-ha-300623" [23f40207-db77-4a02-a2dc-eecea5b1874a] Running
	I1026 01:01:37.296244   27934 system_pods.go:61] "kube-apiserver-ha-300623-m02" [6e2d1aeb-ad12-4328-b4da-6b3a2fd19df0] Running
	I1026 01:01:37.296248   27934 system_pods.go:61] "kube-controller-manager-ha-300623" [b9c979d4-64e6-473c-b688-295ddd98c379] Running
	I1026 01:01:37.296251   27934 system_pods.go:61] "kube-controller-manager-ha-300623-m02" [4ae0dbcd-d50c-4a53-9347-bed0a06f1f15] Running
	I1026 01:01:37.296254   27934 system_pods.go:61] "kube-proxy-65rns" [895d0bd9-0f38-442f-99a2-6c5c70bddd39] Running
	I1026 01:01:37.296257   27934 system_pods.go:61] "kube-proxy-7hn2d" [8ffc007b-7e17-4810-9f44-f190a8a7d21b] Running
	I1026 01:01:37.296260   27934 system_pods.go:61] "kube-scheduler-ha-300623" [fcbddffd-40d8-4ebd-bf1e-58b1457af487] Running
	I1026 01:01:37.296263   27934 system_pods.go:61] "kube-scheduler-ha-300623-m02" [81664577-53a3-46fd-98f0-5a517d60fc40] Running
	I1026 01:01:37.296266   27934 system_pods.go:61] "kube-vip-ha-300623" [23c24ab4-cff5-48fa-841b-9567360cbb00] Running
	I1026 01:01:37.296269   27934 system_pods.go:61] "kube-vip-ha-300623-m02" [5e054e06-be47-4fca-bf3d-d0919d31fe23] Running
	I1026 01:01:37.296272   27934 system_pods.go:61] "storage-provisioner" [28d286b1-45b3-4775-a8ff-47dc3cb84792] Running
	I1026 01:01:37.296277   27934 system_pods.go:74] duration metric: took 182.559653ms to wait for pod list to return data ...
	I1026 01:01:37.296287   27934 default_sa.go:34] waiting for default service account to be created ...
	I1026 01:01:37.487718   27934 request.go:632] Waited for 191.356548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:01:37.487771   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:01:37.487776   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.487783   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.487787   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.491586   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:37.491857   27934 default_sa.go:45] found service account: "default"
	I1026 01:01:37.491878   27934 default_sa.go:55] duration metric: took 195.585476ms for default service account to be created ...
	I1026 01:01:37.491887   27934 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 01:01:37.687316   27934 request.go:632] Waited for 195.344627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:37.687371   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:37.687376   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.687383   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.687387   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.691369   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:37.696949   27934 system_pods.go:86] 17 kube-system pods found
	I1026 01:01:37.696973   27934 system_pods.go:89] "coredns-7c65d6cfc9-ntmgc" [b2e07a8a-ed53-4151-9cdd-6345d84fea7d] Running
	I1026 01:01:37.696979   27934 system_pods.go:89] "coredns-7c65d6cfc9-qx24f" [d7fc0eb5-4828-436f-a5c8-8de607f590cf] Running
	I1026 01:01:37.696983   27934 system_pods.go:89] "etcd-ha-300623" [7af25c40-90db-43fb-9d1c-02d3b6092d30] Running
	I1026 01:01:37.696988   27934 system_pods.go:89] "etcd-ha-300623-m02" [5e6978a1-41aa-46dd-a1cd-e02042d4ce04] Running
	I1026 01:01:37.696991   27934 system_pods.go:89] "kindnet-4cqmf" [c887471a-629c-4bf1-9296-8ccb5ba56cd6] Running
	I1026 01:01:37.696995   27934 system_pods.go:89] "kindnet-g5bkb" [0ad4551d-8c28-45b3-9563-03d427208f4f] Running
	I1026 01:01:37.696999   27934 system_pods.go:89] "kube-apiserver-ha-300623" [23f40207-db77-4a02-a2dc-eecea5b1874a] Running
	I1026 01:01:37.697003   27934 system_pods.go:89] "kube-apiserver-ha-300623-m02" [6e2d1aeb-ad12-4328-b4da-6b3a2fd19df0] Running
	I1026 01:01:37.697006   27934 system_pods.go:89] "kube-controller-manager-ha-300623" [b9c979d4-64e6-473c-b688-295ddd98c379] Running
	I1026 01:01:37.697010   27934 system_pods.go:89] "kube-controller-manager-ha-300623-m02" [4ae0dbcd-d50c-4a53-9347-bed0a06f1f15] Running
	I1026 01:01:37.697014   27934 system_pods.go:89] "kube-proxy-65rns" [895d0bd9-0f38-442f-99a2-6c5c70bddd39] Running
	I1026 01:01:37.697018   27934 system_pods.go:89] "kube-proxy-7hn2d" [8ffc007b-7e17-4810-9f44-f190a8a7d21b] Running
	I1026 01:01:37.697021   27934 system_pods.go:89] "kube-scheduler-ha-300623" [fcbddffd-40d8-4ebd-bf1e-58b1457af487] Running
	I1026 01:01:37.697028   27934 system_pods.go:89] "kube-scheduler-ha-300623-m02" [81664577-53a3-46fd-98f0-5a517d60fc40] Running
	I1026 01:01:37.697031   27934 system_pods.go:89] "kube-vip-ha-300623" [23c24ab4-cff5-48fa-841b-9567360cbb00] Running
	I1026 01:01:37.697034   27934 system_pods.go:89] "kube-vip-ha-300623-m02" [5e054e06-be47-4fca-bf3d-d0919d31fe23] Running
	I1026 01:01:37.697036   27934 system_pods.go:89] "storage-provisioner" [28d286b1-45b3-4775-a8ff-47dc3cb84792] Running
	I1026 01:01:37.697042   27934 system_pods.go:126] duration metric: took 205.150542ms to wait for k8s-apps to be running ...
	I1026 01:01:37.697052   27934 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 01:01:37.697091   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:01:37.712370   27934 system_svc.go:56] duration metric: took 15.306195ms WaitForService to wait for kubelet
	I1026 01:01:37.712402   27934 kubeadm.go:582] duration metric: took 21.119406025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 01:01:37.712420   27934 node_conditions.go:102] verifying NodePressure condition ...
	I1026 01:01:37.886735   27934 request.go:632] Waited for 174.248578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I1026 01:01:37.886856   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I1026 01:01:37.886868   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.886878   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.886887   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.890795   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:37.891473   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:01:37.891497   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:01:37.891509   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:01:37.891513   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:01:37.891517   27934 node_conditions.go:105] duration metric: took 179.092926ms to run NodePressure ...
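The NodePressure step above reads each node's reported capacity (ephemeral storage and CPU). A small client-go sketch that prints the same two values for every node (illustrative only; the default kubeconfig path is assumed):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity is a map of resource name to quantity on the node status.
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	}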
	I1026 01:01:37.891528   27934 start.go:241] waiting for startup goroutines ...
	I1026 01:01:37.891553   27934 start.go:255] writing updated cluster config ...
	I1026 01:01:37.893974   27934 out.go:201] 
	I1026 01:01:37.895579   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:01:37.895693   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:01:37.897785   27934 out.go:177] * Starting "ha-300623-m03" control-plane node in "ha-300623" cluster
	I1026 01:01:37.898981   27934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:01:37.899006   27934 cache.go:56] Caching tarball of preloaded images
	I1026 01:01:37.899114   27934 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:01:37.899125   27934 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 01:01:37.899210   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:01:37.900601   27934 start.go:360] acquireMachinesLock for ha-300623-m03: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 01:01:37.900662   27934 start.go:364] duration metric: took 37.924µs to acquireMachinesLock for "ha-300623-m03"
	I1026 01:01:37.900681   27934 start.go:93] Provisioning new machine with config: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:01:37.900777   27934 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1026 01:01:37.902482   27934 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1026 01:01:37.902577   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:01:37.902616   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:01:37.917489   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I1026 01:01:37.918010   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:01:37.918524   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:01:37.918546   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:01:37.918854   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:01:37.919023   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetMachineName
	I1026 01:01:37.919164   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:01:37.919300   27934 start.go:159] libmachine.API.Create for "ha-300623" (driver="kvm2")
	I1026 01:01:37.919332   27934 client.go:168] LocalClient.Create starting
	I1026 01:01:37.919365   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 01:01:37.919401   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 01:01:37.919415   27934 main.go:141] libmachine: Parsing certificate...
	I1026 01:01:37.919461   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 01:01:37.919481   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 01:01:37.919492   27934 main.go:141] libmachine: Parsing certificate...
	I1026 01:01:37.919511   27934 main.go:141] libmachine: Running pre-create checks...
	I1026 01:01:37.919519   27934 main.go:141] libmachine: (ha-300623-m03) Calling .PreCreateCheck
	I1026 01:01:37.919665   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetConfigRaw
	I1026 01:01:37.920059   27934 main.go:141] libmachine: Creating machine...
	I1026 01:01:37.920075   27934 main.go:141] libmachine: (ha-300623-m03) Calling .Create
	I1026 01:01:37.920211   27934 main.go:141] libmachine: (ha-300623-m03) Creating KVM machine...
	I1026 01:01:37.921465   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found existing default KVM network
	I1026 01:01:37.921611   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found existing private KVM network mk-ha-300623
	I1026 01:01:37.921761   27934 main.go:141] libmachine: (ha-300623-m03) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03 ...
	I1026 01:01:37.921786   27934 main.go:141] libmachine: (ha-300623-m03) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 01:01:37.921849   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:37.921742   28699 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:01:37.921948   27934 main.go:141] libmachine: (ha-300623-m03) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 01:01:38.168295   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:38.168154   28699 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa...
	I1026 01:01:38.291085   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:38.290967   28699 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/ha-300623-m03.rawdisk...
	I1026 01:01:38.291115   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Writing magic tar header
	I1026 01:01:38.291125   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Writing SSH key tar header
	I1026 01:01:38.291132   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:38.291098   28699 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03 ...
	I1026 01:01:38.291249   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03
	I1026 01:01:38.291280   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03 (perms=drwx------)
	I1026 01:01:38.291294   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 01:01:38.291307   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:01:38.291313   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 01:01:38.291323   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 01:01:38.291330   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins
	I1026 01:01:38.291340   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home
	I1026 01:01:38.291363   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 01:01:38.291374   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Skipping /home - not owner
	I1026 01:01:38.291387   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 01:01:38.291395   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 01:01:38.291403   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 01:01:38.291411   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 01:01:38.291417   27934 main.go:141] libmachine: (ha-300623-m03) Creating domain...
	I1026 01:01:38.292244   27934 main.go:141] libmachine: (ha-300623-m03) define libvirt domain using xml: 
	I1026 01:01:38.292268   27934 main.go:141] libmachine: (ha-300623-m03) <domain type='kvm'>
	I1026 01:01:38.292276   27934 main.go:141] libmachine: (ha-300623-m03)   <name>ha-300623-m03</name>
	I1026 01:01:38.292281   27934 main.go:141] libmachine: (ha-300623-m03)   <memory unit='MiB'>2200</memory>
	I1026 01:01:38.292286   27934 main.go:141] libmachine: (ha-300623-m03)   <vcpu>2</vcpu>
	I1026 01:01:38.292290   27934 main.go:141] libmachine: (ha-300623-m03)   <features>
	I1026 01:01:38.292296   27934 main.go:141] libmachine: (ha-300623-m03)     <acpi/>
	I1026 01:01:38.292303   27934 main.go:141] libmachine: (ha-300623-m03)     <apic/>
	I1026 01:01:38.292314   27934 main.go:141] libmachine: (ha-300623-m03)     <pae/>
	I1026 01:01:38.292320   27934 main.go:141] libmachine: (ha-300623-m03)     
	I1026 01:01:38.292330   27934 main.go:141] libmachine: (ha-300623-m03)   </features>
	I1026 01:01:38.292336   27934 main.go:141] libmachine: (ha-300623-m03)   <cpu mode='host-passthrough'>
	I1026 01:01:38.292375   27934 main.go:141] libmachine: (ha-300623-m03)   
	I1026 01:01:38.292393   27934 main.go:141] libmachine: (ha-300623-m03)   </cpu>
	I1026 01:01:38.292406   27934 main.go:141] libmachine: (ha-300623-m03)   <os>
	I1026 01:01:38.292421   27934 main.go:141] libmachine: (ha-300623-m03)     <type>hvm</type>
	I1026 01:01:38.292439   27934 main.go:141] libmachine: (ha-300623-m03)     <boot dev='cdrom'/>
	I1026 01:01:38.292484   27934 main.go:141] libmachine: (ha-300623-m03)     <boot dev='hd'/>
	I1026 01:01:38.292496   27934 main.go:141] libmachine: (ha-300623-m03)     <bootmenu enable='no'/>
	I1026 01:01:38.292505   27934 main.go:141] libmachine: (ha-300623-m03)   </os>
	I1026 01:01:38.292533   27934 main.go:141] libmachine: (ha-300623-m03)   <devices>
	I1026 01:01:38.292552   27934 main.go:141] libmachine: (ha-300623-m03)     <disk type='file' device='cdrom'>
	I1026 01:01:38.292569   27934 main.go:141] libmachine: (ha-300623-m03)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/boot2docker.iso'/>
	I1026 01:01:38.292579   27934 main.go:141] libmachine: (ha-300623-m03)       <target dev='hdc' bus='scsi'/>
	I1026 01:01:38.292598   27934 main.go:141] libmachine: (ha-300623-m03)       <readonly/>
	I1026 01:01:38.292607   27934 main.go:141] libmachine: (ha-300623-m03)     </disk>
	I1026 01:01:38.292617   27934 main.go:141] libmachine: (ha-300623-m03)     <disk type='file' device='disk'>
	I1026 01:01:38.292641   27934 main.go:141] libmachine: (ha-300623-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 01:01:38.292657   27934 main.go:141] libmachine: (ha-300623-m03)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/ha-300623-m03.rawdisk'/>
	I1026 01:01:38.292667   27934 main.go:141] libmachine: (ha-300623-m03)       <target dev='hda' bus='virtio'/>
	I1026 01:01:38.292685   27934 main.go:141] libmachine: (ha-300623-m03)     </disk>
	I1026 01:01:38.292699   27934 main.go:141] libmachine: (ha-300623-m03)     <interface type='network'>
	I1026 01:01:38.292713   27934 main.go:141] libmachine: (ha-300623-m03)       <source network='mk-ha-300623'/>
	I1026 01:01:38.292722   27934 main.go:141] libmachine: (ha-300623-m03)       <model type='virtio'/>
	I1026 01:01:38.292731   27934 main.go:141] libmachine: (ha-300623-m03)     </interface>
	I1026 01:01:38.292740   27934 main.go:141] libmachine: (ha-300623-m03)     <interface type='network'>
	I1026 01:01:38.292749   27934 main.go:141] libmachine: (ha-300623-m03)       <source network='default'/>
	I1026 01:01:38.292759   27934 main.go:141] libmachine: (ha-300623-m03)       <model type='virtio'/>
	I1026 01:01:38.292790   27934 main.go:141] libmachine: (ha-300623-m03)     </interface>
	I1026 01:01:38.292812   27934 main.go:141] libmachine: (ha-300623-m03)     <serial type='pty'>
	I1026 01:01:38.292821   27934 main.go:141] libmachine: (ha-300623-m03)       <target port='0'/>
	I1026 01:01:38.292825   27934 main.go:141] libmachine: (ha-300623-m03)     </serial>
	I1026 01:01:38.292832   27934 main.go:141] libmachine: (ha-300623-m03)     <console type='pty'>
	I1026 01:01:38.292837   27934 main.go:141] libmachine: (ha-300623-m03)       <target type='serial' port='0'/>
	I1026 01:01:38.292843   27934 main.go:141] libmachine: (ha-300623-m03)     </console>
	I1026 01:01:38.292851   27934 main.go:141] libmachine: (ha-300623-m03)     <rng model='virtio'>
	I1026 01:01:38.292862   27934 main.go:141] libmachine: (ha-300623-m03)       <backend model='random'>/dev/random</backend>
	I1026 01:01:38.292871   27934 main.go:141] libmachine: (ha-300623-m03)     </rng>
	I1026 01:01:38.292879   27934 main.go:141] libmachine: (ha-300623-m03)     
	I1026 01:01:38.292887   27934 main.go:141] libmachine: (ha-300623-m03)     
	I1026 01:01:38.292907   27934 main.go:141] libmachine: (ha-300623-m03)   </devices>
	I1026 01:01:38.292927   27934 main.go:141] libmachine: (ha-300623-m03) </domain>
	I1026 01:01:38.292944   27934 main.go:141] libmachine: (ha-300623-m03) 
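The kvm2 driver logs the libvirt domain definition it is about to create, as dumped above. A simplified stand-in showing how such XML can be rendered from a few machine parameters with text/template (this is not the driver's actual template; the disk path is a placeholder):

	package main

	import (
		"os"
		"text/template"
	)

	// A trimmed-down domain definition with the same shape as the one in the log.
	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMiB}}</memory>
	  <vcpu>{{.CPUs}}</vcpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	  </os>
	  <devices>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw'/>
	      <source file='{{.DiskPath}}'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='{{.Network}}'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>
	`

	type machine struct {
		Name      string
		MemoryMiB int
		CPUs      int
		DiskPath  string
		Network   string
	}

	func main() {
		m := machine{
			Name:      "ha-300623-m03",
			MemoryMiB: 2200,
			CPUs:      2,
			DiskPath:  "/path/to/ha-300623-m03.rawdisk", // placeholder path
			Network:   "mk-ha-300623",
		}
		// The rendered XML would then be handed to libvirt (for example via `virsh define`).
		tmpl := template.Must(template.New("domain").Parse(domainTmpl))
		if err := tmpl.Execute(os.Stdout, m); err != nil {
			panic(err)
		}
	}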
	I1026 01:01:38.300030   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:59:6f:46 in network default
	I1026 01:01:38.300611   27934 main.go:141] libmachine: (ha-300623-m03) Ensuring networks are active...
	I1026 01:01:38.300639   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:38.301325   27934 main.go:141] libmachine: (ha-300623-m03) Ensuring network default is active
	I1026 01:01:38.301614   27934 main.go:141] libmachine: (ha-300623-m03) Ensuring network mk-ha-300623 is active
	I1026 01:01:38.301965   27934 main.go:141] libmachine: (ha-300623-m03) Getting domain xml...
	I1026 01:01:38.302564   27934 main.go:141] libmachine: (ha-300623-m03) Creating domain...
	I1026 01:01:39.541523   27934 main.go:141] libmachine: (ha-300623-m03) Waiting to get IP...
	I1026 01:01:39.542453   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:39.542916   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:39.542942   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:39.542887   28699 retry.go:31] will retry after 281.419322ms: waiting for machine to come up
	I1026 01:01:39.826321   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:39.826750   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:39.826778   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:39.826737   28699 retry.go:31] will retry after 326.383367ms: waiting for machine to come up
	I1026 01:01:40.155076   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:40.155490   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:40.155515   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:40.155448   28699 retry.go:31] will retry after 321.43703ms: waiting for machine to come up
	I1026 01:01:40.479066   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:40.479512   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:40.479541   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:40.479464   28699 retry.go:31] will retry after 585.906236ms: waiting for machine to come up
	I1026 01:01:41.068220   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:41.068712   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:41.068740   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:41.068671   28699 retry.go:31] will retry after 528.538636ms: waiting for machine to come up
	I1026 01:01:41.598430   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:41.599018   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:41.599040   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:41.598979   28699 retry.go:31] will retry after 646.897359ms: waiting for machine to come up
	I1026 01:01:42.247537   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:42.247952   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:42.247977   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:42.247889   28699 retry.go:31] will retry after 982.424553ms: waiting for machine to come up
	I1026 01:01:43.231997   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:43.232498   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:43.232526   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:43.232426   28699 retry.go:31] will retry after 920.160573ms: waiting for machine to come up
	I1026 01:01:44.154517   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:44.155015   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:44.155041   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:44.154974   28699 retry.go:31] will retry after 1.233732499s: waiting for machine to come up
	I1026 01:01:45.390175   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:45.390649   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:45.390676   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:45.390595   28699 retry.go:31] will retry after 2.305424014s: waiting for machine to come up
	I1026 01:01:47.698485   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:47.698913   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:47.698936   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:47.698861   28699 retry.go:31] will retry after 2.109217289s: waiting for machine to come up
	I1026 01:01:49.810556   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:49.811065   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:49.811095   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:49.811021   28699 retry.go:31] will retry after 3.235213993s: waiting for machine to come up
	I1026 01:01:53.047405   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:53.047859   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:53.047896   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:53.047798   28699 retry.go:31] will retry after 2.928776248s: waiting for machine to come up
	I1026 01:01:55.979004   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:55.979474   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:55.979500   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:55.979422   28699 retry.go:31] will retry after 4.662153221s: waiting for machine to come up
	I1026 01:02:00.643538   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.644004   27934 main.go:141] libmachine: (ha-300623-m03) Found IP for machine: 192.168.39.180
	I1026 01:02:00.644032   27934 main.go:141] libmachine: (ha-300623-m03) Reserving static IP address...
	I1026 01:02:00.644046   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has current primary IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.644407   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find host DHCP lease matching {name: "ha-300623-m03", mac: "52:54:00:c1:38:db", ip: "192.168.39.180"} in network mk-ha-300623
	I1026 01:02:00.720512   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Getting to WaitForSSH function...
	I1026 01:02:00.720543   27934 main.go:141] libmachine: (ha-300623-m03) Reserved static IP address: 192.168.39.180
	I1026 01:02:00.720555   27934 main.go:141] libmachine: (ha-300623-m03) Waiting for SSH to be available...
	I1026 01:02:00.723096   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.723544   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:00.723574   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.723782   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Using SSH client type: external
	I1026 01:02:00.723802   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa (-rw-------)
	I1026 01:02:00.723832   27934 main.go:141] libmachine: (ha-300623-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 01:02:00.723848   27934 main.go:141] libmachine: (ha-300623-m03) DBG | About to run SSH command:
	I1026 01:02:00.723870   27934 main.go:141] libmachine: (ha-300623-m03) DBG | exit 0
	I1026 01:02:00.849883   27934 main.go:141] libmachine: (ha-300623-m03) DBG | SSH cmd err, output: <nil>: 
	I1026 01:02:00.850375   27934 main.go:141] libmachine: (ha-300623-m03) KVM machine creation complete!
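Between defining the domain and reaching this point, the driver repeatedly re-checked for a DHCP lease, sleeping a growing interval between attempts (the "will retry after ..." lines above). A minimal sketch of that wait-with-backoff pattern (an assumed helper, not minikube's retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check until it returns true, an error, or the timeout expires,
	// growing a jittered delay between attempts.
	func waitFor(check func() (bool, error), timeout time.Duration) error {
		delay := 300 * time.Millisecond
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			ok, err := check()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s\n", sleep)
			time.Sleep(sleep)
			if delay < 5*time.Second {
				delay *= 2
			}
		}
		return errors.New("timed out waiting for condition")
	}

	func main() {
		attempts := 0
		err := waitFor(func() (bool, error) {
			attempts++
			return attempts >= 4, nil // stand-in for "machine has an IP address"
		}, time.Minute)
		fmt.Println(err)
	}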
	I1026 01:02:00.850699   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetConfigRaw
	I1026 01:02:00.851242   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:00.851412   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:00.851548   27934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 01:02:00.851566   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetState
	I1026 01:02:00.852882   27934 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 01:02:00.852898   27934 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 01:02:00.852910   27934 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 01:02:00.852920   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:00.855365   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.855806   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:00.855828   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.856011   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:00.856209   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:00.856384   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:00.856518   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:00.856737   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:00.856963   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:00.856977   27934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 01:02:00.960586   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
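WaitForSSH declares the machine reachable once running `exit 0` over SSH with the generated key succeeds, as logged above. A sketch of that probe using golang.org/x/crypto/ssh (illustrative only; the key path is a placeholder, and host-key checking is disabled because the test VM's host key is throwaway):

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// sshAvailable connects to addr with the given private key and runs "exit 0".
	func sshAvailable(addr, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		config := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM host key
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, config)
		if err != nil {
			return err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		return session.Run("exit 0") // succeeds only once sshd accepts commands
	}

	func main() {
		// Placeholder key path; the log above uses the per-machine id_rsa under .minikube/machines.
		fmt.Println(sshAvailable("192.168.39.180:22", "/path/to/machines/ha-300623-m03/id_rsa"))
	}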
	I1026 01:02:00.960610   27934 main.go:141] libmachine: Detecting the provisioner...
	I1026 01:02:00.960620   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:00.963489   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.963835   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:00.963855   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.964027   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:00.964212   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:00.964377   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:00.964520   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:00.964689   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:00.964839   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:00.964850   27934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 01:02:01.070154   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 01:02:01.070243   27934 main.go:141] libmachine: found compatible host: buildroot
	I1026 01:02:01.070253   27934 main.go:141] libmachine: Provisioning with buildroot...
	I1026 01:02:01.070260   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetMachineName
	I1026 01:02:01.070494   27934 buildroot.go:166] provisioning hostname "ha-300623-m03"
	I1026 01:02:01.070509   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetMachineName
	I1026 01:02:01.070670   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.073236   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.073643   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.073674   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.073803   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.074025   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.074141   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.074309   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.074462   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:01.074668   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:01.074685   27934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-300623-m03 && echo "ha-300623-m03" | sudo tee /etc/hostname
	I1026 01:02:01.191755   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-300623-m03
	
	I1026 01:02:01.191785   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.194565   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.194928   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.194957   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.195106   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.195276   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.195444   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.195582   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.195873   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:01.196084   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:01.196105   27934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-300623-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-300623-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-300623-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:02:01.305994   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:02:01.306027   27934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:02:01.306044   27934 buildroot.go:174] setting up certificates
	I1026 01:02:01.306053   27934 provision.go:84] configureAuth start
	I1026 01:02:01.306066   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetMachineName
	I1026 01:02:01.306391   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetIP
	I1026 01:02:01.308943   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.309271   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.309299   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.309440   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.311607   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.311976   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.312003   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.312212   27934 provision.go:143] copyHostCerts
	I1026 01:02:01.312245   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:02:01.312277   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:02:01.312286   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:02:01.312350   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:02:01.312423   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:02:01.312441   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:02:01.312445   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:02:01.312471   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:02:01.312516   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:02:01.312533   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:02:01.312540   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:02:01.312560   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:02:01.312651   27934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.ha-300623-m03 san=[127.0.0.1 192.168.39.180 ha-300623-m03 localhost minikube]
	I1026 01:02:01.465531   27934 provision.go:177] copyRemoteCerts
	I1026 01:02:01.465583   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:02:01.465608   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.468185   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.468506   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.468531   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.468753   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.468983   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.469158   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.469293   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:02:01.551550   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:02:01.551614   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:02:01.576554   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:02:01.576635   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 01:02:01.602350   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:02:01.602435   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 01:02:01.626219   27934 provision.go:87] duration metric: took 320.153705ms to configureAuth
	I1026 01:02:01.626250   27934 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:02:01.626469   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:02:01.626540   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.629202   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.629541   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.629569   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.629826   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.630038   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.630193   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.630349   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.630520   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:01.630681   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:01.630695   27934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:02:01.850626   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
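	The step above writes CRIO_MINIKUBE_OPTIONS (an --insecure-registry flag covering the in-cluster service CIDR 10.96.0.0/12) to /etc/sysconfig/crio.minikube and restarts crio; on the minikube guest image that file is presumably picked up by the crio unit as an environment file (assumption, not shown in the log). A minimal spot-check on the node:
	    cat /etc/sysconfig/crio.minikube   # should contain CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    systemctl is-active crio           # should print "active" after the restart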
	I1026 01:02:01.850656   27934 main.go:141] libmachine: Checking connection to Docker...
	I1026 01:02:01.850666   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetURL
	I1026 01:02:01.851985   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Using libvirt version 6000000
	I1026 01:02:01.853953   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.854248   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.854277   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.854395   27934 main.go:141] libmachine: Docker is up and running!
	I1026 01:02:01.854410   27934 main.go:141] libmachine: Reticulating splines...
	I1026 01:02:01.854416   27934 client.go:171] duration metric: took 23.935075321s to LocalClient.Create
	I1026 01:02:01.854435   27934 start.go:167] duration metric: took 23.935138215s to libmachine.API.Create "ha-300623"
	I1026 01:02:01.854442   27934 start.go:293] postStartSetup for "ha-300623-m03" (driver="kvm2")
	I1026 01:02:01.854455   27934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:02:01.854473   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:01.854694   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:02:01.854714   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.856743   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.857033   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.857061   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.857181   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.857358   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.857509   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.857636   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:02:01.939727   27934 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:02:01.943512   27934 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:02:01.943536   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:02:01.943602   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:02:01.943673   27934 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:02:01.943683   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /etc/ssl/certs/176152.pem
	I1026 01:02:01.943769   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:02:01.952556   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:02:01.974588   27934 start.go:296] duration metric: took 120.131633ms for postStartSetup
	I1026 01:02:01.974635   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetConfigRaw
	I1026 01:02:01.975249   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetIP
	I1026 01:02:01.977630   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.977939   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.977966   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.978201   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:02:01.978439   27934 start.go:128] duration metric: took 24.077650452s to createHost
	I1026 01:02:01.978471   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.981153   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.981663   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.981690   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.981836   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.981994   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.982159   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.982318   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.982480   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:01.982694   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:01.982711   27934 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:02:02.085984   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729904522.063699456
	
	I1026 01:02:02.086012   27934 fix.go:216] guest clock: 1729904522.063699456
	I1026 01:02:02.086022   27934 fix.go:229] Guest: 2024-10-26 01:02:02.063699456 +0000 UTC Remote: 2024-10-26 01:02:01.978456379 +0000 UTC m=+140.913817945 (delta=85.243077ms)
	I1026 01:02:02.086043   27934 fix.go:200] guest clock delta is within tolerance: 85.243077ms
	I1026 01:02:02.086049   27934 start.go:83] releasing machines lock for "ha-300623-m03", held for 24.185376811s
	I1026 01:02:02.086067   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:02.086287   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetIP
	I1026 01:02:02.088913   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.089268   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:02.089295   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.091504   27934 out.go:177] * Found network options:
	I1026 01:02:02.092955   27934 out.go:177]   - NO_PROXY=192.168.39.183,192.168.39.62
	W1026 01:02:02.094206   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	W1026 01:02:02.094236   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:02:02.094251   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:02.094803   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:02.094989   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:02.095095   27934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:02:02.095133   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	W1026 01:02:02.095154   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	W1026 01:02:02.095180   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:02:02.095247   27934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:02:02.095268   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:02.097751   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.098028   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.098086   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:02.098111   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.098235   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:02.098391   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:02.098497   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:02.098514   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.098524   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:02.098666   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:02:02.098717   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:02.098843   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:02.098984   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:02.099112   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:02:02.334862   27934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:02:02.340486   27934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:02:02.340547   27934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:02:02.357805   27934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 01:02:02.357834   27934 start.go:495] detecting cgroup driver to use...
	I1026 01:02:02.357898   27934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:02:02.374996   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:02:02.392000   27934 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:02:02.392086   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:02:02.407807   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:02:02.423965   27934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:02:02.552274   27934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:02:02.700711   27934 docker.go:233] disabling docker service ...
	I1026 01:02:02.700771   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:02:02.718236   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:02:02.732116   27934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:02:02.868905   27934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:02:02.980683   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:02:02.994225   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:02:03.012791   27934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:02:03.012857   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.023082   27934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:02:03.023153   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.033232   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.045462   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.056259   27934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:02:03.067151   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.077520   27934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.096669   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
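	The sed runs above edit /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is set to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected under default_sysctls. A quick way to verify the result on the guest (sketch; expected values come straight from the commands above):
	    grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected, roughly:
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",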
	I1026 01:02:03.106891   27934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:02:03.116392   27934 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 01:02:03.116458   27934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 01:02:03.129779   27934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
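	The sysctl failure at 01:02:03.116 is expected on a fresh guest: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so minikube loads the module and then enables IPv4 forwarding. Done by hand, the equivalent sequence is roughly:
	    sudo modprobe br_netfilter
	    sudo sysctl net.bridge.bridge-nf-call-iptables        # resolvable once the module is loaded
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'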
	I1026 01:02:03.139745   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:02:03.248476   27934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 01:02:03.335933   27934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:02:03.336001   27934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:02:03.341028   27934 start.go:563] Will wait 60s for crictl version
	I1026 01:02:03.341087   27934 ssh_runner.go:195] Run: which crictl
	I1026 01:02:03.344865   27934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:02:03.384107   27934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:02:03.384182   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:02:03.413095   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:02:03.443714   27934 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:02:03.445737   27934 out.go:177]   - env NO_PROXY=192.168.39.183
	I1026 01:02:03.447586   27934 out.go:177]   - env NO_PROXY=192.168.39.183,192.168.39.62
	I1026 01:02:03.449031   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetIP
	I1026 01:02:03.452447   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:03.452878   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:03.452917   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:03.453179   27934 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:02:03.457652   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:02:03.471067   27934 mustload.go:65] Loading cluster: ha-300623
	I1026 01:02:03.471351   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:02:03.471669   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:02:03.471714   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:02:03.487194   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
	I1026 01:02:03.487657   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:02:03.488105   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:02:03.488127   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:02:03.488437   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:02:03.488638   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:02:03.490095   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:02:03.490500   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:02:03.490536   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:02:03.506020   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I1026 01:02:03.506418   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:02:03.506947   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:02:03.506976   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:02:03.507350   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:02:03.507527   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:02:03.507727   27934 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623 for IP: 192.168.39.180
	I1026 01:02:03.507740   27934 certs.go:194] generating shared ca certs ...
	I1026 01:02:03.507758   27934 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:02:03.507883   27934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:02:03.507924   27934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:02:03.507933   27934 certs.go:256] generating profile certs ...
	I1026 01:02:03.508003   27934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key
	I1026 01:02:03.508028   27934 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.71a5adc0
	I1026 01:02:03.508039   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.71a5adc0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.62 192.168.39.180 192.168.39.254]
	I1026 01:02:03.728822   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.71a5adc0 ...
	I1026 01:02:03.728854   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.71a5adc0: {Name:mk13b323a89a31df62edb3f93e2caa9ef5c95608 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:02:03.729026   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.71a5adc0 ...
	I1026 01:02:03.729038   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.71a5adc0: {Name:mk931eb52f244ae5eac81e077cce00cf1844fe8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:02:03.729110   27934 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.71a5adc0 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt
	I1026 01:02:03.729242   27934 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.71a5adc0 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key
	I1026 01:02:03.729367   27934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key
	I1026 01:02:03.729382   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:02:03.729396   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:02:03.729409   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:02:03.729443   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:02:03.729457   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:02:03.729475   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:02:03.729491   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:02:03.749554   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:02:03.749647   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:02:03.749686   27934 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:02:03.749696   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:02:03.749718   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:02:03.749740   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:02:03.749762   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:02:03.749801   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:02:03.749827   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:02:03.749842   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem -> /usr/share/ca-certificates/17615.pem
	I1026 01:02:03.749854   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /usr/share/ca-certificates/176152.pem
	I1026 01:02:03.749890   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:02:03.752989   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:02:03.753341   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:02:03.753364   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:02:03.753579   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:02:03.753776   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:02:03.753920   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:02:03.754076   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:02:03.829849   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1026 01:02:03.834830   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1026 01:02:03.846065   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1026 01:02:03.849963   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1026 01:02:03.859787   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1026 01:02:03.863509   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1026 01:02:03.873244   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1026 01:02:03.876871   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1026 01:02:03.892364   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1026 01:02:03.896520   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1026 01:02:03.907397   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1026 01:02:03.911631   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1026 01:02:03.924039   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:02:03.948397   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:02:03.971545   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:02:03.994742   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:02:04.019083   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1026 01:02:04.043193   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 01:02:04.066431   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:02:04.089556   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:02:04.112422   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:02:04.137648   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:02:04.163111   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:02:04.187974   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1026 01:02:04.204419   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1026 01:02:04.221407   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1026 01:02:04.240446   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1026 01:02:04.258125   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1026 01:02:04.274506   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1026 01:02:04.290927   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1026 01:02:04.307309   27934 ssh_runner.go:195] Run: openssl version
	I1026 01:02:04.312975   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:02:04.323808   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:02:04.328222   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:02:04.328286   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:02:04.334015   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:02:04.344665   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:02:04.355274   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:02:04.359793   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:02:04.359862   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:02:04.365345   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:02:04.376251   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:02:04.387304   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:02:04.391720   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:02:04.391792   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:02:04.397948   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
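	The openssl/ln steps above install each CA bundle under /usr/share/ca-certificates and link it into /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL locates trust anchors in a hashed directory. Reproducing one link by hand looks roughly like:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"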
	I1026 01:02:04.409356   27934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:02:04.413518   27934 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 01:02:04.413569   27934 kubeadm.go:934] updating node {m03 192.168.39.180 8443 v1.31.2 crio true true} ...
	I1026 01:02:04.413666   27934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-300623-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
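	The kubeadm.go:946 block above is the kubelet systemd drop-in rendered for this node: ExecStart is cleared and re-set so the kubelet runs from the version-pinned path with --hostname-override=ha-300623-m03 and --node-ip=192.168.39.180. It is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps below and can be inspected on the node with, for example:
	    systemctl cat kubelet
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf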
	I1026 01:02:04.413689   27934 kube-vip.go:115] generating kube-vip config ...
	I1026 01:02:04.413726   27934 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1026 01:02:04.429892   27934 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1026 01:02:04.429970   27934 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
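	The manifest above runs kube-vip as a static pod on the control-plane node: with cp_enable and vip_leaderelection set, the instances on the control planes compete for the plndr-cp-lock lease in kube-system, and the current leader answers ARP for the cluster VIP 192.168.39.254 and, with lb_enable, balances API traffic on port 8443 across the control planes. A sketch for checking which node currently holds the VIP, assuming a working kubeconfig:
	    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'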
	I1026 01:02:04.430030   27934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:02:04.439803   27934 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1026 01:02:04.439857   27934 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1026 01:02:04.448835   27934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1026 01:02:04.448847   27934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1026 01:02:04.448867   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1026 01:02:04.448890   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:02:04.448924   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1026 01:02:04.448835   27934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1026 01:02:04.448969   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1026 01:02:04.449022   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1026 01:02:04.453004   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1026 01:02:04.453036   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1026 01:02:04.477386   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1026 01:02:04.477445   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1026 01:02:04.477465   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1026 01:02:04.477513   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1026 01:02:04.523830   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1026 01:02:04.523877   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
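	Because /var/lib/minikube/binaries/v1.31.2 does not exist yet, kubeadm, kubectl and kubelet are copied in from the host-side cache; the binary.go lines show they were originally fetched from dl.k8s.io together with a .sha256 checksum file. Fetching and verifying one of them manually looks roughly like:
	    curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet
	    curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check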
	I1026 01:02:05.306345   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1026 01:02:05.316372   27934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1026 01:02:05.333527   27934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:02:05.350382   27934 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1026 01:02:05.366102   27934 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1026 01:02:05.369984   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:02:05.381182   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:02:05.496759   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:02:05.512263   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:02:05.512689   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:02:05.512740   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:02:05.531279   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40195
	I1026 01:02:05.531819   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:02:05.532966   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:02:05.532989   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:02:05.533339   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:02:05.533529   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:02:05.533682   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:02:05.533682   27934 start.go:317] joinCluster: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:02:05.533839   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1026 01:02:05.533866   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:02:05.536583   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:02:05.537028   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:02:05.537057   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:02:05.537282   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:02:05.537491   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:02:05.537676   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:02:05.537795   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:02:05.697156   27934 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:02:05.697206   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v8d8ct.yqbxucpp9erkd2fb --discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m03 --control-plane --apiserver-advertise-address=192.168.39.180 --apiserver-bind-port=8443"
	I1026 01:02:29.292626   27934 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v8d8ct.yqbxucpp9erkd2fb --discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m03 --control-plane --apiserver-advertise-address=192.168.39.180 --apiserver-bind-port=8443": (23.595390034s)
	I1026 01:02:29.292667   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1026 01:02:29.885895   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-300623-m03 minikube.k8s.io/updated_at=2024_10_26T01_02_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=ha-300623 minikube.k8s.io/primary=false
	I1026 01:02:29.997019   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-300623-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1026 01:02:30.136451   27934 start.go:319] duration metric: took 24.602766496s to joinCluster
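	The join sequence above is the heart of this step: minikube asks the primary for a join command (kubeadm token create --print-join-command), runs kubeadm join with --control-plane on the new machine, starts the kubelet, then labels the node and removes the control-plane NoSchedule taint. A rough manual equivalent, shown only as a sketch (the token and discovery hash are placeholders taken from the token create output):

	    # on an existing control-plane node: print a join command with a fresh token
	    sudo kubeadm token create --print-join-command --ttl=0
	    # on the machine being added as an additional control-plane node (values are placeholders)
	    sudo kubeadm join control-plane.minikube.internal:8443 --token <token> \
	        --discovery-token-ca-cert-hash sha256:<hash> \
	        --control-plane --apiserver-advertise-address=192.168.39.180 --apiserver-bind-port=8443
	    sudo systemctl enable --now kubelet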
	I1026 01:02:30.136544   27934 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:02:30.137000   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:02:30.137905   27934 out.go:177] * Verifying Kubernetes components...
	I1026 01:02:30.139044   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:02:30.389764   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:02:30.425326   27934 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:02:30.425691   27934 kapi.go:59] client config for ha-300623: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt", KeyFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key", CAFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 01:02:30.425759   27934 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.183:8443
	I1026 01:02:30.426058   27934 node_ready.go:35] waiting up to 6m0s for node "ha-300623-m03" to be "Ready" ...
	I1026 01:02:30.426159   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:30.426170   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:30.426180   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:30.426189   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:30.431156   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:02:30.926776   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:30.926801   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:30.926811   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:30.926819   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:30.930142   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:31.426736   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:31.426771   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:31.426783   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:31.426791   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:31.430233   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:31.926707   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:31.926732   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:31.926744   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:31.926753   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:31.929704   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:32.426493   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:32.426514   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:32.426522   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:32.426527   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:32.429836   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:32.430379   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:32.926337   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:32.926363   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:32.926376   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:32.926383   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:32.929516   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:33.426312   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:33.426334   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:33.426342   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:33.426364   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:33.430395   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:02:33.927020   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:33.927043   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:33.927050   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:33.927053   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:33.930539   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:34.426611   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:34.426637   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:34.426649   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:34.426653   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:34.429762   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:34.926585   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:34.926607   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:34.926616   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:34.926622   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:34.929963   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:34.930447   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:35.426739   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:35.426760   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:35.426786   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:35.426791   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:35.429676   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:35.926699   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:35.926723   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:35.926731   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:35.926735   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:35.930444   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:36.427025   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:36.427052   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:36.427063   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:36.427069   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:36.430961   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:36.926688   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:36.926715   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:36.926726   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:36.926732   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:36.930504   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:36.931114   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:37.426533   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:37.426568   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:37.426581   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:37.426588   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:37.434793   27934 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1026 01:02:37.926670   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:37.926699   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:37.926711   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:37.926717   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:37.929364   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:38.427306   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:38.427327   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:38.427335   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:38.427339   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:38.434499   27934 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1026 01:02:38.926882   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:38.926902   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:38.926911   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:38.926914   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:38.930831   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:38.931460   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:39.427252   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:39.427274   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:39.427283   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:39.427286   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:39.430650   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:39.926620   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:39.926643   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:39.926654   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:39.926661   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:39.930077   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:40.426363   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:40.426396   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:40.426408   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:40.426414   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:40.429976   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:40.926280   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:40.926310   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:40.926320   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:40.926325   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:40.929942   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:41.426533   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:41.426556   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:41.426563   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:41.426568   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:41.430315   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:41.431209   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:41.926498   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:41.926522   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:41.926529   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:41.926534   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:41.929738   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:42.426973   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:42.427006   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:42.427013   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:42.427019   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:42.430244   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:42.927247   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:42.927275   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:42.927283   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:42.927288   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:42.930906   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:43.426731   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:43.426759   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:43.426768   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:43.426773   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:43.430712   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:43.431301   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:43.926784   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:43.926823   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:43.926832   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:43.926835   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:43.929957   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:44.427237   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:44.427258   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:44.427266   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:44.427270   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:44.430769   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:44.926707   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:44.926731   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:44.926740   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:44.926743   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:44.930247   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:45.427043   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:45.427065   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:45.427074   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:45.427079   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:45.430820   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:45.431387   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:45.927275   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:45.927296   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:45.927304   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:45.927306   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:45.930627   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:46.426245   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:46.426266   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:46.426274   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:46.426278   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:46.429561   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:46.926352   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:46.926373   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:46.926384   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:46.926390   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:46.929454   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:47.426420   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:47.426462   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:47.426472   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:47.426477   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:47.430019   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:47.926864   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:47.926889   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:47.926900   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:47.926906   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:47.929997   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:47.930569   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:48.426656   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:48.426693   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.426709   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.426716   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.435417   27934 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1026 01:02:48.436037   27934 node_ready.go:49] node "ha-300623-m03" has status "Ready":"True"
	I1026 01:02:48.436062   27934 node_ready.go:38] duration metric: took 18.009981713s for node "ha-300623-m03" to be "Ready" ...
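	Each GET above is one poll of the node object: minikube keeps re-reading /api/v1/nodes/ha-300623-m03 until its Ready condition turns True, which here took about 18 seconds. A one-off check of the same condition from a shell, sketched against the kubeconfig this run uses, would be something like:

	    kubectl --kubeconfig /home/jenkins/minikube-integration/19868-8680/kubeconfig \
	        wait --for=condition=Ready node/ha-300623-m03 --timeout=6m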
	I1026 01:02:48.436077   27934 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:02:48.436165   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:48.436180   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.436190   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.436203   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.442639   27934 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1026 01:02:48.450258   27934 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.450343   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ntmgc
	I1026 01:02:48.450349   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.450356   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.450360   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.454261   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:48.454872   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:48.454888   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.454895   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.454900   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.459379   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:02:48.460137   27934 pod_ready.go:93] pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.460155   27934 pod_ready.go:82] duration metric: took 9.869467ms for pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.460165   27934 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.460215   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qx24f
	I1026 01:02:48.460224   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.460231   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.460233   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.463232   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.463771   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:48.463783   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.463792   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.463797   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.466281   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.466732   27934 pod_ready.go:93] pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.466748   27934 pod_ready.go:82] duration metric: took 6.577285ms for pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.466762   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.466818   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623
	I1026 01:02:48.466826   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.466833   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.466837   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.469268   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.469931   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:48.469946   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.469953   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.469957   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.472212   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.472664   27934 pod_ready.go:93] pod "etcd-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.472682   27934 pod_ready.go:82] duration metric: took 5.914156ms for pod "etcd-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.472691   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.472750   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623-m02
	I1026 01:02:48.472759   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.472766   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.472770   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.475167   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.475777   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:48.475794   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.475802   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.475806   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.478259   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.478687   27934 pod_ready.go:93] pod "etcd-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.478703   27934 pod_ready.go:82] duration metric: took 6.006167ms for pod "etcd-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.478711   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.627599   27934 request.go:632] Waited for 148.830245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623-m03
	I1026 01:02:48.627657   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623-m03
	I1026 01:02:48.627667   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.627674   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.627680   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.631663   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:48.827561   27934 request.go:632] Waited for 195.345637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:48.827630   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:48.827637   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.827645   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.827649   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.831042   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:48.831791   27934 pod_ready.go:93] pod "etcd-ha-300623-m03" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.831815   27934 pod_ready.go:82] duration metric: took 353.094836ms for pod "etcd-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.831835   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.027283   27934 request.go:632] Waited for 195.388128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623
	I1026 01:02:49.027360   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623
	I1026 01:02:49.027365   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.027373   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.027380   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.030439   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:49.227538   27934 request.go:632] Waited for 196.377694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:49.227614   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:49.227627   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.227643   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.227650   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.230823   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:49.231339   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:49.231360   27934 pod_ready.go:82] duration metric: took 399.517961ms for pod "kube-apiserver-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.231374   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.426746   27934 request.go:632] Waited for 195.299777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m02
	I1026 01:02:49.426820   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m02
	I1026 01:02:49.426826   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.426833   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.426842   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.430033   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:49.626896   27934 request.go:632] Waited for 196.298512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:49.626964   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:49.626970   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.626977   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.626980   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.630142   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:49.630626   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:49.630645   27934 pod_ready.go:82] duration metric: took 399.259883ms for pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.630655   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.826666   27934 request.go:632] Waited for 195.934282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m03
	I1026 01:02:49.826722   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m03
	I1026 01:02:49.826727   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.826739   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.826744   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.830021   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.027111   27934 request.go:632] Waited for 196.361005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:50.027198   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:50.027210   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.027222   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.027231   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.030533   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.031215   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623-m03" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:50.031238   27934 pod_ready.go:82] duration metric: took 400.574994ms for pod "kube-apiserver-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.031268   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.227253   27934 request.go:632] Waited for 195.903041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623
	I1026 01:02:50.227309   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623
	I1026 01:02:50.227314   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.227321   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.227325   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.230415   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.427535   27934 request.go:632] Waited for 196.340381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:50.427594   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:50.427602   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.427612   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.427619   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.430823   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.431395   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:50.431413   27934 pod_ready.go:82] duration metric: took 400.135776ms for pod "kube-controller-manager-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.431426   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.626990   27934 request.go:632] Waited for 195.470744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m02
	I1026 01:02:50.627069   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m02
	I1026 01:02:50.627075   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.627082   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.627087   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.630185   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.827370   27934 request.go:632] Waited for 196.34647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:50.827442   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:50.827448   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.827455   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.827461   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.831085   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.831842   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:50.831859   27934 pod_ready.go:82] duration metric: took 400.426225ms for pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.831869   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.027015   27934 request.go:632] Waited for 195.078027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m03
	I1026 01:02:51.027084   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m03
	I1026 01:02:51.027092   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.027099   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.027103   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.031047   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:51.227422   27934 request.go:632] Waited for 195.619523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:51.227479   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:51.227484   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.227492   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.227495   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.231982   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:02:51.232544   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623-m03" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:51.232570   27934 pod_ready.go:82] duration metric: took 400.691296ms for pod "kube-controller-manager-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.232584   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-65rns" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.427652   27934 request.go:632] Waited for 194.988908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65rns
	I1026 01:02:51.427748   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65rns
	I1026 01:02:51.427756   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.427763   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.427769   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.431107   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:51.627383   27934 request.go:632] Waited for 195.646071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:51.627443   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:51.627450   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.627459   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.627465   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.630345   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:51.630913   27934 pod_ready.go:93] pod "kube-proxy-65rns" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:51.630940   27934 pod_ready.go:82] duration metric: took 398.33791ms for pod "kube-proxy-65rns" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.630957   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7hn2d" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.826903   27934 request.go:632] Waited for 195.872288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hn2d
	I1026 01:02:51.826976   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hn2d
	I1026 01:02:51.826981   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.826989   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.826995   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.830596   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.027634   27934 request.go:632] Waited for 196.404478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:52.027720   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:52.027729   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.027740   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.027744   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.031724   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.032488   27934 pod_ready.go:93] pod "kube-proxy-7hn2d" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:52.032512   27934 pod_ready.go:82] duration metric: took 401.542551ms for pod "kube-proxy-7hn2d" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.032525   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mv7sf" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.227636   27934 request.go:632] Waited for 195.035156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mv7sf
	I1026 01:02:52.227691   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mv7sf
	I1026 01:02:52.227697   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.227705   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.227713   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.230866   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.426675   27934 request.go:632] Waited for 195.29136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:52.426757   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:52.426765   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.426775   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.426782   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.429979   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.430570   27934 pod_ready.go:93] pod "kube-proxy-mv7sf" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:52.430594   27934 pod_ready.go:82] duration metric: took 398.058369ms for pod "kube-proxy-mv7sf" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.430608   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.627616   27934 request.go:632] Waited for 196.938648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623
	I1026 01:02:52.627691   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623
	I1026 01:02:52.627697   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.627704   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.627709   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.631135   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.827333   27934 request.go:632] Waited for 195.390365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:52.827388   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:52.827397   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.827404   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.827409   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.830746   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.831581   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:52.831599   27934 pod_ready.go:82] duration metric: took 400.983275ms for pod "kube-scheduler-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.831611   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:53.026899   27934 request.go:632] Waited for 195.225563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m02
	I1026 01:02:53.026954   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m02
	I1026 01:02:53.026959   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.026967   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.026971   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.030270   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:53.227500   27934 request.go:632] Waited for 196.386112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:53.227559   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:53.227564   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.227572   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.227577   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.231336   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:53.231867   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:53.231885   27934 pod_ready.go:82] duration metric: took 400.266151ms for pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:53.231896   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:53.426974   27934 request.go:632] Waited for 194.996598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m03
	I1026 01:02:53.427025   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m03
	I1026 01:02:53.427030   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.427037   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.427041   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.430377   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:53.626766   27934 request.go:632] Waited for 195.735993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:53.626824   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:53.626829   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.626836   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.626840   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.630167   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:53.630954   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623-m03" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:53.630975   27934 pod_ready.go:82] duration metric: took 399.071645ms for pod "kube-scheduler-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:53.630992   27934 pod_ready.go:39] duration metric: took 5.19490109s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
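	The block above waits for every system-critical pod (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) across all three control-plane nodes; the "Waited ... due to client-side throttling" lines come from client-go's default client-side rate limiter spacing the requests out, not from any server-side problem. The same state can be inspected directly, sketched here with the kubeconfig from this run:

	    kubectl --kubeconfig /home/jenkins/minikube-integration/19868-8680/kubeconfig \
	        -n kube-system get pods -o wide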
	I1026 01:02:53.631015   27934 api_server.go:52] waiting for apiserver process to appear ...
	I1026 01:02:53.631076   27934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:02:53.646977   27934 api_server.go:72] duration metric: took 23.510394339s to wait for apiserver process to appear ...
	I1026 01:02:53.647007   27934 api_server.go:88] waiting for apiserver healthz status ...
	I1026 01:02:53.647030   27934 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1026 01:02:53.651895   27934 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1026 01:02:53.651966   27934 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I1026 01:02:53.651972   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.651979   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.651983   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.652674   27934 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1026 01:02:53.652802   27934 api_server.go:141] control plane version: v1.31.2
	I1026 01:02:53.652821   27934 api_server.go:131] duration metric: took 5.805941ms to wait for apiserver health ...
	I1026 01:02:53.652830   27934 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 01:02:53.827168   27934 request.go:632] Waited for 174.273301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:53.827222   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:53.827228   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.827235   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.827240   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.834306   27934 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1026 01:02:53.841838   27934 system_pods.go:59] 24 kube-system pods found
	I1026 01:02:53.841872   27934 system_pods.go:61] "coredns-7c65d6cfc9-ntmgc" [b2e07a8a-ed53-4151-9cdd-6345d84fea7d] Running
	I1026 01:02:53.841879   27934 system_pods.go:61] "coredns-7c65d6cfc9-qx24f" [d7fc0eb5-4828-436f-a5c8-8de607f590cf] Running
	I1026 01:02:53.841885   27934 system_pods.go:61] "etcd-ha-300623" [7af25c40-90db-43fb-9d1c-02d3b6092d30] Running
	I1026 01:02:53.841891   27934 system_pods.go:61] "etcd-ha-300623-m02" [5e6978a1-41aa-46dd-a1cd-e02042d4ce04] Running
	I1026 01:02:53.841897   27934 system_pods.go:61] "etcd-ha-300623-m03" [018c3dbe-0bf5-489e-804a-fb1e3195eded] Running
	I1026 01:02:53.841901   27934 system_pods.go:61] "kindnet-2v827" [0a2f3ac1-e6ff-4f8a-83bd-0b8c82e2070b] Running
	I1026 01:02:53.841906   27934 system_pods.go:61] "kindnet-4cqmf" [c887471a-629c-4bf1-9296-8ccb5ba56cd6] Running
	I1026 01:02:53.841911   27934 system_pods.go:61] "kindnet-g5bkb" [0ad4551d-8c28-45b3-9563-03d427208f4f] Running
	I1026 01:02:53.841916   27934 system_pods.go:61] "kube-apiserver-ha-300623" [23f40207-db77-4a02-a2dc-eecea5b1874a] Running
	I1026 01:02:53.841921   27934 system_pods.go:61] "kube-apiserver-ha-300623-m02" [6e2d1aeb-ad12-4328-b4da-6b3a2fd19df0] Running
	I1026 01:02:53.841927   27934 system_pods.go:61] "kube-apiserver-ha-300623-m03" [4f6f2be0-c13c-48d1-b645-719d861bfc9d] Running
	I1026 01:02:53.841932   27934 system_pods.go:61] "kube-controller-manager-ha-300623" [b9c979d4-64e6-473c-b688-295ddd98c379] Running
	I1026 01:02:53.841938   27934 system_pods.go:61] "kube-controller-manager-ha-300623-m02" [4ae0dbcd-d50c-4a53-9347-bed0a06f1f15] Running
	I1026 01:02:53.841945   27934 system_pods.go:61] "kube-controller-manager-ha-300623-m03" [43a89828-44bd-4c39-8656-ce212592e684] Running
	I1026 01:02:53.841951   27934 system_pods.go:61] "kube-proxy-65rns" [895d0bd9-0f38-442f-99a2-6c5c70bddd39] Running
	I1026 01:02:53.841959   27934 system_pods.go:61] "kube-proxy-7hn2d" [8ffc007b-7e17-4810-9f44-f190a8a7d21b] Running
	I1026 01:02:53.841964   27934 system_pods.go:61] "kube-proxy-mv7sf" [687c9b8d-6dc7-46b4-b5c6-dce15b93fe5c] Running
	I1026 01:02:53.841970   27934 system_pods.go:61] "kube-scheduler-ha-300623" [fcbddffd-40d8-4ebd-bf1e-58b1457af487] Running
	I1026 01:02:53.841976   27934 system_pods.go:61] "kube-scheduler-ha-300623-m02" [81664577-53a3-46fd-98f0-5a517d60fc40] Running
	I1026 01:02:53.841982   27934 system_pods.go:61] "kube-scheduler-ha-300623-m03" [4e0f23a0-d27b-4a4f-88cb-9f9fd09cc873] Running
	I1026 01:02:53.841992   27934 system_pods.go:61] "kube-vip-ha-300623" [23c24ab4-cff5-48fa-841b-9567360cbb00] Running
	I1026 01:02:53.841998   27934 system_pods.go:61] "kube-vip-ha-300623-m02" [5e054e06-be47-4fca-bf3d-d0919d31fe23] Running
	I1026 01:02:53.842006   27934 system_pods.go:61] "kube-vip-ha-300623-m03" [e650a523-9ff0-41d2-9446-c84aa4f0b88c] Running
	I1026 01:02:53.842011   27934 system_pods.go:61] "storage-provisioner" [28d286b1-45b3-4775-a8ff-47dc3cb84792] Running
	I1026 01:02:53.842020   27934 system_pods.go:74] duration metric: took 189.182306ms to wait for pod list to return data ...
	I1026 01:02:53.842033   27934 default_sa.go:34] waiting for default service account to be created ...
	I1026 01:02:54.027353   27934 request.go:632] Waited for 185.245125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:02:54.027412   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:02:54.027420   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:54.027431   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:54.027441   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:54.030973   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:54.031077   27934 default_sa.go:45] found service account: "default"
	I1026 01:02:54.031089   27934 default_sa.go:55] duration metric: took 189.048618ms for default service account to be created ...
	I1026 01:02:54.031098   27934 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 01:02:54.227423   27934 request.go:632] Waited for 196.255704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:54.227482   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:54.227493   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:54.227507   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:54.227517   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:54.232907   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:02:54.240539   27934 system_pods.go:86] 24 kube-system pods found
	I1026 01:02:54.240565   27934 system_pods.go:89] "coredns-7c65d6cfc9-ntmgc" [b2e07a8a-ed53-4151-9cdd-6345d84fea7d] Running
	I1026 01:02:54.240571   27934 system_pods.go:89] "coredns-7c65d6cfc9-qx24f" [d7fc0eb5-4828-436f-a5c8-8de607f590cf] Running
	I1026 01:02:54.240574   27934 system_pods.go:89] "etcd-ha-300623" [7af25c40-90db-43fb-9d1c-02d3b6092d30] Running
	I1026 01:02:54.240578   27934 system_pods.go:89] "etcd-ha-300623-m02" [5e6978a1-41aa-46dd-a1cd-e02042d4ce04] Running
	I1026 01:02:54.240582   27934 system_pods.go:89] "etcd-ha-300623-m03" [018c3dbe-0bf5-489e-804a-fb1e3195eded] Running
	I1026 01:02:54.240586   27934 system_pods.go:89] "kindnet-2v827" [0a2f3ac1-e6ff-4f8a-83bd-0b8c82e2070b] Running
	I1026 01:02:54.240589   27934 system_pods.go:89] "kindnet-4cqmf" [c887471a-629c-4bf1-9296-8ccb5ba56cd6] Running
	I1026 01:02:54.240592   27934 system_pods.go:89] "kindnet-g5bkb" [0ad4551d-8c28-45b3-9563-03d427208f4f] Running
	I1026 01:02:54.240595   27934 system_pods.go:89] "kube-apiserver-ha-300623" [23f40207-db77-4a02-a2dc-eecea5b1874a] Running
	I1026 01:02:54.240599   27934 system_pods.go:89] "kube-apiserver-ha-300623-m02" [6e2d1aeb-ad12-4328-b4da-6b3a2fd19df0] Running
	I1026 01:02:54.240602   27934 system_pods.go:89] "kube-apiserver-ha-300623-m03" [4f6f2be0-c13c-48d1-b645-719d861bfc9d] Running
	I1026 01:02:54.240606   27934 system_pods.go:89] "kube-controller-manager-ha-300623" [b9c979d4-64e6-473c-b688-295ddd98c379] Running
	I1026 01:02:54.240609   27934 system_pods.go:89] "kube-controller-manager-ha-300623-m02" [4ae0dbcd-d50c-4a53-9347-bed0a06f1f15] Running
	I1026 01:02:54.240613   27934 system_pods.go:89] "kube-controller-manager-ha-300623-m03" [43a89828-44bd-4c39-8656-ce212592e684] Running
	I1026 01:02:54.240616   27934 system_pods.go:89] "kube-proxy-65rns" [895d0bd9-0f38-442f-99a2-6c5c70bddd39] Running
	I1026 01:02:54.240620   27934 system_pods.go:89] "kube-proxy-7hn2d" [8ffc007b-7e17-4810-9f44-f190a8a7d21b] Running
	I1026 01:02:54.240624   27934 system_pods.go:89] "kube-proxy-mv7sf" [687c9b8d-6dc7-46b4-b5c6-dce15b93fe5c] Running
	I1026 01:02:54.240627   27934 system_pods.go:89] "kube-scheduler-ha-300623" [fcbddffd-40d8-4ebd-bf1e-58b1457af487] Running
	I1026 01:02:54.240632   27934 system_pods.go:89] "kube-scheduler-ha-300623-m02" [81664577-53a3-46fd-98f0-5a517d60fc40] Running
	I1026 01:02:54.240635   27934 system_pods.go:89] "kube-scheduler-ha-300623-m03" [4e0f23a0-d27b-4a4f-88cb-9f9fd09cc873] Running
	I1026 01:02:54.240641   27934 system_pods.go:89] "kube-vip-ha-300623" [23c24ab4-cff5-48fa-841b-9567360cbb00] Running
	I1026 01:02:54.240644   27934 system_pods.go:89] "kube-vip-ha-300623-m02" [5e054e06-be47-4fca-bf3d-d0919d31fe23] Running
	I1026 01:02:54.240647   27934 system_pods.go:89] "kube-vip-ha-300623-m03" [e650a523-9ff0-41d2-9446-c84aa4f0b88c] Running
	I1026 01:02:54.240650   27934 system_pods.go:89] "storage-provisioner" [28d286b1-45b3-4775-a8ff-47dc3cb84792] Running
	I1026 01:02:54.240656   27934 system_pods.go:126] duration metric: took 209.550822ms to wait for k8s-apps to be running ...
	I1026 01:02:54.240667   27934 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 01:02:54.240705   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:02:54.259476   27934 system_svc.go:56] duration metric: took 18.80003ms WaitForService to wait for kubelet
	I1026 01:02:54.259503   27934 kubeadm.go:582] duration metric: took 24.122925603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 01:02:54.259520   27934 node_conditions.go:102] verifying NodePressure condition ...
	I1026 01:02:54.427334   27934 request.go:632] Waited for 167.728559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I1026 01:02:54.427409   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I1026 01:02:54.427417   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:54.427430   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:54.427440   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:54.431191   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:54.432324   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:02:54.432349   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:02:54.432365   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:02:54.432369   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:02:54.432378   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:02:54.432383   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:02:54.432391   27934 node_conditions.go:105] duration metric: took 172.867066ms to run NodePressure ...
	I1026 01:02:54.432404   27934 start.go:241] waiting for startup goroutines ...
	I1026 01:02:54.432431   27934 start.go:255] writing updated cluster config ...
	I1026 01:02:54.432784   27934 ssh_runner.go:195] Run: rm -f paused
	I1026 01:02:54.484591   27934 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1026 01:02:54.487070   27934 out.go:177] * Done! kubectl is now configured to use "ha-300623" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.406918147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4df0cb9c-c31c-49c1-ba80-37b59c825dae name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.408060276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f552b30-da7d-449b-b561-3b6966407831 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.408474147Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904799408452249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f552b30-da7d-449b-b561-3b6966407831 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.409084444Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63a871c1-fc77-4d8a-83dc-4d2379f7940e name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.409144752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63a871c1-fc77-4d8a-83dc-4d2379f7940e name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.409802087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85cbf0b8850a2112e92fcc3614b8431c369be6d12b745402809010b5c69e6855,PodSandboxId:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1729904578918936204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758,PodSandboxId:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438995903574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d,PodSandboxId:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438988403122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d7fc0eb5-4828-436f-a5c8-8de607f590cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862c0633984db26e703979be6515817dbe5b1bab13be77cbd4231bdb96801841,PodSandboxId:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1729904437981904808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde,PodSandboxId:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17299044
25720308757,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa,PodSandboxId:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729904425491717711,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c,PodSandboxId:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1729904415339625152,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b,PodSandboxId:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729904412567756795,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3,PodSandboxId:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729904412574844578,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901,PodSandboxId:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729904412570090151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d,PodSandboxId:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729904412474137473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63a871c1-fc77-4d8a-83dc-4d2379f7940e name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.445861095Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=086e1b0d-f8b1-4ef6-bb65-620c24a8bf32 name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.445937028Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=086e1b0d-f8b1-4ef6-bb65-620c24a8bf32 name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.446893291Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5bebf4c1-4022-4392-a8a1-4841fb1747b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.447298120Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904799447278281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5bebf4c1-4022-4392-a8a1-4841fb1747b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.447910637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d75c945c-7663-4829-868d-14647c62df68 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.447979524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d75c945c-7663-4829-868d-14647c62df68 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.448329630Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85cbf0b8850a2112e92fcc3614b8431c369be6d12b745402809010b5c69e6855,PodSandboxId:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1729904578918936204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758,PodSandboxId:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438995903574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d,PodSandboxId:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438988403122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d7fc0eb5-4828-436f-a5c8-8de607f590cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862c0633984db26e703979be6515817dbe5b1bab13be77cbd4231bdb96801841,PodSandboxId:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1729904437981904808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde,PodSandboxId:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17299044
25720308757,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa,PodSandboxId:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729904425491717711,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c,PodSandboxId:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1729904415339625152,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b,PodSandboxId:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729904412567756795,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3,PodSandboxId:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729904412574844578,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901,PodSandboxId:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729904412570090151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d,PodSandboxId:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729904412474137473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d75c945c-7663-4829-868d-14647c62df68 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.486156511Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3da86a94-0f3d-4967-b1ea-5ccf903650f6 name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.486230047Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3da86a94-0f3d-4967-b1ea-5ccf903650f6 name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.487503972Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a3c8d82-fe7a-4135-9fff-353b2d79ac6c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.487981989Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904799487955766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a3c8d82-fe7a-4135-9fff-353b2d79ac6c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.488595072Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b1881f5-4f1e-4c0c-bb6d-baecd79ea14c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.488701854Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b1881f5-4f1e-4c0c-bb6d-baecd79ea14c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.488933335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85cbf0b8850a2112e92fcc3614b8431c369be6d12b745402809010b5c69e6855,PodSandboxId:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1729904578918936204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758,PodSandboxId:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438995903574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d,PodSandboxId:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438988403122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d7fc0eb5-4828-436f-a5c8-8de607f590cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862c0633984db26e703979be6515817dbe5b1bab13be77cbd4231bdb96801841,PodSandboxId:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1729904437981904808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde,PodSandboxId:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17299044
25720308757,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa,PodSandboxId:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729904425491717711,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c,PodSandboxId:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1729904415339625152,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b,PodSandboxId:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729904412567756795,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3,PodSandboxId:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729904412574844578,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901,PodSandboxId:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729904412570090151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d,PodSandboxId:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729904412474137473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b1881f5-4f1e-4c0c-bb6d-baecd79ea14c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.505270385Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=ae88c7e3-fec4-4f41-9de9-af0194a9c575 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.505548294Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-x8rtl,Uid:6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729904575706714688,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-26T01:02:55.380549223Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-qx24f,Uid:d7fc0eb5-4828-436f-a5c8-8de607f590cf,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1729904438786386214,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7fc0eb5-4828-436f-a5c8-8de607f590cf,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-26T01:00:37.571896743Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-ntmgc,Uid:b2e07a8a-ed53-4151-9cdd-6345d84fea7d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729904438785251518,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-10-26T01:00:37.575495733Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:28d286b1-45b3-4775-a8ff-47dc3cb84792,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729904437888943909,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-26T01:00:37.582414256Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&PodSandboxMetadata{Name:kindnet-4cqmf,Uid:c887471a-629c-4bf1-9296-8ccb5ba56cd6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729904425380985873,Labels:map[string]string{app: kindnet,controller-revision-hash: 6f5b6b96c8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-10-26T01:00:23.564834591Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&PodSandboxMetadata{Name:kube-proxy-65rns,Uid:895d0bd9-0f38-442f-99a2-6c5c70bddd39,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729904425378867404,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-26T01:00:23.562085717Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-300623,Uid:410b9cc8959a0fa37bf3160dd4fd727c,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1729904412347047838,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{kubernetes.io/config.hash: 410b9cc8959a0fa37bf3160dd4fd727c,kubernetes.io/config.seen: 2024-10-26T01:00:11.844787188Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-300623,Uid:3667e64614764ba947adeb95343bcaa4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729904412334943636,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,tier: control-plane,},Annotations:map[string]string{kube
rnetes.io/config.hash: 3667e64614764ba947adeb95343bcaa4,kubernetes.io/config.seen: 2024-10-26T01:00:11.844785068Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-300623,Uid:7ffe5fa9ca4441188a606a24bdbe8722,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729904412332428042,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7ffe5fa9ca4441188a606a24bdbe8722,kubernetes.io/config.seen: 2024-10-26T01:00:11.844786414Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&PodSandboxMetadata{Name:etcd-ha-300623,Uid:75551103
2387c79ea08c24551165d530,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729904412326410681,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.183:2379,kubernetes.io/config.hash: 755511032387c79ea08c24551165d530,kubernetes.io/config.seen: 2024-10-26T01:00:11.844779536Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-300623,Uid:48b8c6bdc451f81cc4a6c8319036ea10,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729904412306749441,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.183:8443,kubernetes.io/config.hash: 48b8c6bdc451f81cc4a6c8319036ea10,kubernetes.io/config.seen: 2024-10-26T01:00:11.844783442Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ae88c7e3-fec4-4f41-9de9-af0194a9c575 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.507100276Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9eedd18f-85a9-4909-b551-09c290ac904f name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.507162284Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9eedd18f-85a9-4909-b551-09c290ac904f name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:39 ha-300623 crio[655]: time="2024-10-26 01:06:39.507615441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85cbf0b8850a2112e92fcc3614b8431c369be6d12b745402809010b5c69e6855,PodSandboxId:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1729904578918936204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758,PodSandboxId:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438995903574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d,PodSandboxId:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438988403122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d7fc0eb5-4828-436f-a5c8-8de607f590cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862c0633984db26e703979be6515817dbe5b1bab13be77cbd4231bdb96801841,PodSandboxId:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1729904437981904808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde,PodSandboxId:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17299044
25720308757,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa,PodSandboxId:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729904425491717711,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c,PodSandboxId:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1729904415339625152,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b,PodSandboxId:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729904412567756795,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3,PodSandboxId:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729904412574844578,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901,PodSandboxId:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729904412570090151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d,PodSandboxId:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729904412474137473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9eedd18f-85a9-4909-b551-09c290ac904f name=/runtime.v1.RuntimeService/ListContainers
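
The ListPodSandbox/ListContainers entries above are CRI gRPC calls recorded by crio's otel-collector interceptors. A minimal sketch (not part of the test run) of issuing the same ListContainers RPC follows; the socket path matches the cri-socket annotation reported for this cluster, while the google.golang.org/grpc and k8s.io/cri-api modules and all identifiers in the sketch are assumptions for illustration only.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Connect to the CRI-O runtime socket (unix:///var/run/crio/crio.sock, as
	// reported in the node annotations for this cluster).
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC as /runtime.v1.RuntimeService/ListContainers in the log above;
	// an empty filter returns the full container list.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Truncate the ID to 13 characters, as in the container status table.
		fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
	}
}
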
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85cbf0b8850a2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   731eca9181f8b       busybox-7dff88458-x8rtl
	ca2bd9d7fe0a2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   20e3c054f64b8       coredns-7c65d6cfc9-ntmgc
	56c849c3f6d25       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   d580ea18268bf       coredns-7c65d6cfc9-qx24f
	862c0633984db       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   f6635176e0517       storage-provisioner
	d6d0d55128c15       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                      6 minutes ago       Running             kindnet-cni               0                   cffe8a0cf602c       kindnet-4cqmf
	f7fca08cb5de6       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   94078692adcf1       kube-proxy-65rns
	a103c72040168       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215     6 minutes ago       Running             kube-vip                  0                   620e95994188b       kube-vip-ha-300623
	47a0b2ec9c50d       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   f86f0547d7e3f       kube-controller-manager-ha-300623
	3e321e090fa4b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   a63bff1c62868       etcd-ha-300623
	3c25e47b58ddc       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   9b38c5bcef6f6       kube-scheduler-ha-300623
	3bcea9b84ac37       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   e9bc0343ef669       kube-apiserver-ha-300623
	
	
	==> coredns [56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d] <==
	[INFO] 10.244.0.4:35752 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000083964s
	[INFO] 10.244.0.4:46160 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000070172s
	[INFO] 10.244.2.2:48496 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233704s
	[INFO] 10.244.2.2:43326 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002692245s
	[INFO] 10.244.1.2:54632 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145197s
	[INFO] 10.244.1.2:39137 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001866788s
	[INFO] 10.244.1.2:37569 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000241474s
	[INFO] 10.244.0.4:42983 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170463s
	[INFO] 10.244.0.4:34095 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002204796s
	[INFO] 10.244.0.4:47258 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001867963s
	[INFO] 10.244.0.4:59491 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141493s
	[INFO] 10.244.0.4:57514 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133403s
	[INFO] 10.244.0.4:45585 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000174758s
	[INFO] 10.244.2.2:57387 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165086s
	[INFO] 10.244.2.2:37898 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136051s
	[INFO] 10.244.1.2:45240 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130797s
	[INFO] 10.244.1.2:40585 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000259318s
	[INFO] 10.244.1.2:54189 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089088s
	[INFO] 10.244.1.2:56872 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108098s
	[INFO] 10.244.0.4:43642 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083444s
	[INFO] 10.244.2.2:37138 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000161058s
	[INFO] 10.244.1.2:45522 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000237498s
	[INFO] 10.244.1.2:48964 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000122296s
	[INFO] 10.244.0.4:46128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168182s
	[INFO] 10.244.0.4:35635 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143147s
	
	
	==> coredns [ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758] <==
	[INFO] 10.244.2.2:54963 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004547023s
	[INFO] 10.244.2.2:34531 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000244595s
	[INFO] 10.244.2.2:44217 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000362208s
	[INFO] 10.244.2.2:60780 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018037s
	[INFO] 10.244.2.2:60725 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000259265s
	[INFO] 10.244.2.2:33992 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168214s
	[INFO] 10.244.1.2:48441 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000237097s
	[INFO] 10.244.1.2:50414 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002508011s
	[INFO] 10.244.1.2:36962 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211094s
	[INFO] 10.244.1.2:45147 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163251s
	[INFO] 10.244.1.2:56149 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125966s
	[INFO] 10.244.0.4:56735 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092196s
	[INFO] 10.244.0.4:37487 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002015s
	[INFO] 10.244.2.2:53825 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125794s
	[INFO] 10.244.2.2:52505 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000213989s
	[INFO] 10.244.0.4:37131 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125177s
	[INFO] 10.244.0.4:45742 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131329s
	[INFO] 10.244.0.4:52634 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089226s
	[INFO] 10.244.2.2:58146 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000286556s
	[INFO] 10.244.2.2:59488 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000218728s
	[INFO] 10.244.2.2:51165 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00028421s
	[INFO] 10.244.1.2:37736 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160474s
	[INFO] 10.244.1.2:60585 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000238531s
	[INFO] 10.244.0.4:46233 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000078598s
	[INFO] 10.244.0.4:39578 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000277206s
	
	
	==> describe nodes <==
	Name:               ha-300623
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-300623
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=ha-300623
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_26T01_00_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:00:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-300623
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:06:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 01:03:22 +0000   Sat, 26 Oct 2024 01:00:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 01:03:22 +0000   Sat, 26 Oct 2024 01:00:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 01:03:22 +0000   Sat, 26 Oct 2024 01:00:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 01:03:22 +0000   Sat, 26 Oct 2024 01:00:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-300623
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 92684f32bf5c4a5ea50d57cd59f5b8ee
	  System UUID:                92684f32-bf5c-4a5e-a50d-57cd59f5b8ee
	  Boot ID:                    3d5330c9-a2ef-4296-ab11-4c9bb32f97df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x8rtl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 coredns-7c65d6cfc9-ntmgc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 coredns-7c65d6cfc9-qx24f             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 etcd-ha-300623                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m20s
	  kube-system                 kindnet-4cqmf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m16s
	  kube-system                 kube-apiserver-ha-300623             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-controller-manager-ha-300623    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-65rns                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-scheduler-ha-300623             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-vip-ha-300623                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m13s  kube-proxy       
	  Normal  Starting                 6m21s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m20s  kubelet          Node ha-300623 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s  kubelet          Node ha-300623 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s  kubelet          Node ha-300623 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m17s  node-controller  Node ha-300623 event: Registered Node ha-300623 in Controller
	  Normal  NodeReady                6m2s   kubelet          Node ha-300623 status is now: NodeReady
	  Normal  RegisteredNode           5m18s  node-controller  Node ha-300623 event: Registered Node ha-300623 in Controller
	  Normal  RegisteredNode           4m4s   node-controller  Node ha-300623 event: Registered Node ha-300623 in Controller
	
	
	Name:               ha-300623-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-300623-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=ha-300623
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_26T01_01_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:01:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-300623-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:04:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 26 Oct 2024 01:03:16 +0000   Sat, 26 Oct 2024 01:04:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 26 Oct 2024 01:03:16 +0000   Sat, 26 Oct 2024 01:04:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 26 Oct 2024 01:03:16 +0000   Sat, 26 Oct 2024 01:04:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 26 Oct 2024 01:03:16 +0000   Sat, 26 Oct 2024 01:04:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    ha-300623-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 619e0e81a0ef43a9b2e79bbc4eb9355e
	  System UUID:                619e0e81-a0ef-43a9-b2e7-9bbc4eb9355e
	  Boot ID:                    89b92f6c-664b-4721-8f8c-216a0ad0c2d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qtdcl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-300623-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m24s
	  kube-system                 kindnet-g5bkb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m26s
	  kube-system                 kube-apiserver-ha-300623-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-controller-manager-ha-300623-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-7hn2d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-scheduler-ha-300623-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-vip-ha-300623-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m22s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m26s (x8 over 5m26s)  kubelet          Node ha-300623-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m26s (x8 over 5m26s)  kubelet          Node ha-300623-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m26s (x7 over 5m26s)  kubelet          Node ha-300623-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-300623-m02 event: Registered Node ha-300623-m02 in Controller
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-300623-m02 event: Registered Node ha-300623-m02 in Controller
	  Normal  RegisteredNode           4m4s                   node-controller  Node ha-300623-m02 event: Registered Node ha-300623-m02 in Controller
	  Normal  NodeNotReady             112s                   node-controller  Node ha-300623-m02 status is now: NodeNotReady
	
	
	Name:               ha-300623-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-300623-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=ha-300623
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_26T01_02_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:02:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-300623-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:06:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 01:03:27 +0000   Sat, 26 Oct 2024 01:02:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 01:03:27 +0000   Sat, 26 Oct 2024 01:02:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 01:03:27 +0000   Sat, 26 Oct 2024 01:02:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 01:03:27 +0000   Sat, 26 Oct 2024 01:02:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.180
	  Hostname:    ha-300623-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 97987e99f2594f70b58fe3aa149b6c7c
	  System UUID:                97987e99-f259-4f70-b58f-e3aa149b6c7c
	  Boot ID:                    7e140c77-fbc1-46f9-addb-72cf937d1703
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mbn94                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-300623-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m11s
	  kube-system                 kindnet-2v827                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m13s
	  kube-system                 kube-apiserver-ha-300623-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-ha-300623-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-proxy-mv7sf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-ha-300623-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-vip-ha-300623-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m8s                   kube-proxy       
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-300623-m03 event: Registered Node ha-300623-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m13s (x8 over 4m13s)  kubelet          Node ha-300623-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s (x8 over 4m13s)  kubelet          Node ha-300623-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s (x7 over 4m13s)  kubelet          Node ha-300623-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-300623-m03 event: Registered Node ha-300623-m03 in Controller
	  Normal  RegisteredNode           4m4s                   node-controller  Node ha-300623-m03 event: Registered Node ha-300623-m03 in Controller
	
	
	Name:               ha-300623-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-300623-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=ha-300623
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_26T01_03_33_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:03:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-300623-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:06:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 01:04:03 +0000   Sat, 26 Oct 2024 01:03:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 01:04:03 +0000   Sat, 26 Oct 2024 01:03:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 01:04:03 +0000   Sat, 26 Oct 2024 01:03:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 01:04:03 +0000   Sat, 26 Oct 2024 01:03:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.197
	  Hostname:    ha-300623-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 505edce099ab4a75b83037ad7ab46771
	  System UUID:                505edce0-99ab-4a75-b830-37ad7ab46771
	  Boot ID:                    896f9280-eb70-46a8-9d85-c3814086494a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fsnn6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m6s
	  kube-system                 kube-proxy-4zk2k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  3m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m6s (x2 over 3m7s)  kubelet          Node ha-300623-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x2 over 3m7s)  kubelet          Node ha-300623-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x2 over 3m7s)  kubelet          Node ha-300623-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-300623-m04 event: Registered Node ha-300623-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-300623-m04 event: Registered Node ha-300623-m04 in Controller
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-300623-m04 event: Registered Node ha-300623-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-300623-m04 status is now: NodeReady
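
The node descriptions above are kubectl-style describe output; the Conditions tables are what report ha-300623-m02 as NotReady. A minimal client-go sketch (illustrative only, not the tooling used by this test) that retrieves the same Ready conditions is shown below; the kubeconfig path is an assumption and would need to point at this profile's kubeconfig.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig from the default location (assumed path).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print each node's Ready condition, mirroring the Conditions tables above.
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%-16s Ready=%-8s %s\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}
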
	
	
	==> dmesg <==
	[Oct26 00:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050258] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037804] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.782226] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.951939] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.521399] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct26 01:00] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.061621] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060766] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.166618] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.145628] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.268359] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +3.874441] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.666530] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.060776] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.257866] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.091250] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.528305] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.572352] kauditd_printk_skb: 41 callbacks suppressed
	[Oct26 01:01] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901] <==
	{"level":"warn","ts":"2024-10-26T01:06:39.659028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.726424Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.735902Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.739909Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.748184Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.754109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.759210Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.759601Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.762812Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.765251Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.769732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.774965Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.782100Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.785540Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.788359Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.793322Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.798989Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.804896Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.807887Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.810361Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.813734Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.821086Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.827568Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.858674Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:39.860528Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 01:06:39 up 6 min,  0 users,  load average: 0.14, 0.24, 0.13
	Linux ha-300623 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde] <==
	I1026 01:06:07.184462       1 main.go:323] Node ha-300623-m04 has CIDR [10.244.3.0/24] 
	I1026 01:06:17.174569       1 main.go:296] Handling node with IPs: map[192.168.39.183:{}]
	I1026 01:06:17.174737       1 main.go:300] handling current node
	I1026 01:06:17.174803       1 main.go:296] Handling node with IPs: map[192.168.39.62:{}]
	I1026 01:06:17.174825       1 main.go:323] Node ha-300623-m02 has CIDR [10.244.1.0/24] 
	I1026 01:06:17.175067       1 main.go:296] Handling node with IPs: map[192.168.39.180:{}]
	I1026 01:06:17.175100       1 main.go:323] Node ha-300623-m03 has CIDR [10.244.2.0/24] 
	I1026 01:06:17.175206       1 main.go:296] Handling node with IPs: map[192.168.39.197:{}]
	I1026 01:06:17.175228       1 main.go:323] Node ha-300623-m04 has CIDR [10.244.3.0/24] 
	I1026 01:06:27.175173       1 main.go:296] Handling node with IPs: map[192.168.39.183:{}]
	I1026 01:06:27.175288       1 main.go:300] handling current node
	I1026 01:06:27.175317       1 main.go:296] Handling node with IPs: map[192.168.39.62:{}]
	I1026 01:06:27.175335       1 main.go:323] Node ha-300623-m02 has CIDR [10.244.1.0/24] 
	I1026 01:06:27.175551       1 main.go:296] Handling node with IPs: map[192.168.39.180:{}]
	I1026 01:06:27.175580       1 main.go:323] Node ha-300623-m03 has CIDR [10.244.2.0/24] 
	I1026 01:06:27.175762       1 main.go:296] Handling node with IPs: map[192.168.39.197:{}]
	I1026 01:06:27.175795       1 main.go:323] Node ha-300623-m04 has CIDR [10.244.3.0/24] 
	I1026 01:06:37.177801       1 main.go:296] Handling node with IPs: map[192.168.39.183:{}]
	I1026 01:06:37.177885       1 main.go:300] handling current node
	I1026 01:06:37.177904       1 main.go:296] Handling node with IPs: map[192.168.39.62:{}]
	I1026 01:06:37.177911       1 main.go:323] Node ha-300623-m02 has CIDR [10.244.1.0/24] 
	I1026 01:06:37.178155       1 main.go:296] Handling node with IPs: map[192.168.39.180:{}]
	I1026 01:06:37.178179       1 main.go:323] Node ha-300623-m03 has CIDR [10.244.2.0/24] 
	I1026 01:06:37.178289       1 main.go:296] Handling node with IPs: map[192.168.39.197:{}]
	I1026 01:06:37.178308       1 main.go:323] Node ha-300623-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d] <==
	W1026 01:00:17.926981       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.183]
	I1026 01:00:17.928181       1 controller.go:615] quota admission added evaluator for: endpoints
	I1026 01:00:17.935826       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 01:00:17.947904       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1026 01:00:18.894624       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1026 01:00:18.916292       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 01:00:19.043184       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1026 01:00:23.502518       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1026 01:00:23.580105       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1026 01:03:00.396346       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48596: use of closed network connection
	E1026 01:03:00.597696       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48608: use of closed network connection
	E1026 01:03:00.779383       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48638: use of closed network connection
	E1026 01:03:00.968960       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48650: use of closed network connection
	E1026 01:03:01.159859       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48672: use of closed network connection
	E1026 01:03:01.356945       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48682: use of closed network connection
	E1026 01:03:01.529718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48700: use of closed network connection
	E1026 01:03:01.709409       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60606: use of closed network connection
	E1026 01:03:01.891333       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60636: use of closed network connection
	E1026 01:03:02.183836       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60668: use of closed network connection
	E1026 01:03:02.371592       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60688: use of closed network connection
	E1026 01:03:02.545427       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60698: use of closed network connection
	E1026 01:03:02.716320       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60708: use of closed network connection
	E1026 01:03:02.895527       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60734: use of closed network connection
	E1026 01:03:03.082972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60756: use of closed network connection
	W1026 01:04:27.938129       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.180 192.168.39.183]
	
	
	==> kube-controller-manager [47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3] <==
	I1026 01:03:33.037458       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:33.051536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:33.162489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	E1026 01:03:33.296244       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"ff6c8323-43e2-4224-a2c5-fbee23186204\", ResourceVersion:\"911\", Generation:1, CreationTimestamp:time.Date(2024, time.October, 26, 1, 0, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\\",
\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20241007-36f62932\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b16180), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\
", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002641908), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeCl
aimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002641920), EmptyDir:(*v1.EmptyDirVolumeSource)(n
il), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVo
lumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002641938), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), Azur
eFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20241007-36f62932\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001b161a0)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSou
rce)(0xc001b161e0)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false,
RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc002a7eba0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContai
ner(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002879af8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002835100), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Ove
rhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0029fa100)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002879b40)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1026 01:03:33.604085       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:35.173961       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:36.911095       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:36.978536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:37.761108       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-300623-m04"
	I1026 01:03:37.763013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:37.822795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:43.288569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:52.993775       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-300623-m04"
	I1026 01:03:52.994235       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:53.016162       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:55.127200       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:04:03.835355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:04:47.785209       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-300623-m04"
	I1026 01:04:47.785779       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m02"
	I1026 01:04:47.821461       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m02"
	I1026 01:04:47.859957       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.530512ms"
	I1026 01:04:47.860782       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="74.115µs"
	I1026 01:04:50.162222       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m02"
	I1026 01:04:52.952538       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m02"
	
	
	==> kube-proxy [f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1026 01:00:25.689413       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1026 01:00:25.723767       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.183"]
	E1026 01:00:25.723854       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 01:00:25.758166       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1026 01:00:25.758214       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 01:00:25.758247       1 server_linux.go:169] "Using iptables Proxier"
	I1026 01:00:25.760715       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 01:00:25.761068       1 server.go:483] "Version info" version="v1.31.2"
	I1026 01:00:25.761102       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 01:00:25.763718       1 config.go:199] "Starting service config controller"
	I1026 01:00:25.763757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1026 01:00:25.763790       1 config.go:105] "Starting endpoint slice config controller"
	I1026 01:00:25.763796       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1026 01:00:25.764426       1 config.go:328] "Starting node config controller"
	I1026 01:00:25.764461       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1026 01:00:25.864157       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1026 01:00:25.864237       1 shared_informer.go:320] Caches are synced for service config
	I1026 01:00:25.864661       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b] <==
	I1026 01:02:26.440503       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2v827" node="ha-300623-m03"
	E1026 01:02:55.345123       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qtdcl\": pod busybox-7dff88458-qtdcl is already assigned to node \"ha-300623-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-qtdcl" node="ha-300623-m02"
	E1026 01:02:55.345196       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1d2aa5b5-e44c-4423-a263-a19406face68(default/busybox-7dff88458-qtdcl) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-qtdcl"
	E1026 01:02:55.345218       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qtdcl\": pod busybox-7dff88458-qtdcl is already assigned to node \"ha-300623-m02\"" pod="default/busybox-7dff88458-qtdcl"
	I1026 01:02:55.345275       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-qtdcl" node="ha-300623-m02"
	E1026 01:02:55.394267       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x8rtl\": pod busybox-7dff88458-x8rtl is already assigned to node \"ha-300623\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-x8rtl" node="ha-300623"
	E1026 01:02:55.394343       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5(default/busybox-7dff88458-x8rtl) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-x8rtl"
	E1026 01:02:55.394364       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x8rtl\": pod busybox-7dff88458-x8rtl is already assigned to node \"ha-300623\"" pod="default/busybox-7dff88458-x8rtl"
	I1026 01:02:55.394386       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-x8rtl" node="ha-300623"
	E1026 01:02:55.394962       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-mbn94\": pod busybox-7dff88458-mbn94 is already assigned to node \"ha-300623-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-mbn94" node="ha-300623-m03"
	E1026 01:02:55.395010       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod dd5257f3-d0ba-4672-9836-da890e32fb0d(default/busybox-7dff88458-mbn94) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-mbn94"
	E1026 01:02:55.395023       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-mbn94\": pod busybox-7dff88458-mbn94 is already assigned to node \"ha-300623-m03\"" pod="default/busybox-7dff88458-mbn94"
	I1026 01:02:55.395037       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-mbn94" node="ha-300623-m03"
	E1026 01:03:33.099592       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4zk2k\": pod kube-proxy-4zk2k is already assigned to node \"ha-300623-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4zk2k" node="ha-300623-m04"
	E1026 01:03:33.101341       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8e40741c-73a0-41fa-b38f-a59fed42525b(kube-system/kube-proxy-4zk2k) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-4zk2k"
	E1026 01:03:33.101520       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4zk2k\": pod kube-proxy-4zk2k is already assigned to node \"ha-300623-m04\"" pod="kube-system/kube-proxy-4zk2k"
	I1026 01:03:33.101594       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4zk2k" node="ha-300623-m04"
	E1026 01:03:33.102404       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-l58kk\": pod kindnet-l58kk is already assigned to node \"ha-300623-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-l58kk" node="ha-300623-m04"
	E1026 01:03:33.109277       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 952ba5f9-93b1-4543-8b73-3ac1600315fc(kube-system/kindnet-l58kk) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-l58kk"
	E1026 01:03:33.109487       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-l58kk\": pod kindnet-l58kk is already assigned to node \"ha-300623-m04\"" pod="kube-system/kindnet-l58kk"
	I1026 01:03:33.109689       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-l58kk" node="ha-300623-m04"
	E1026 01:03:33.136820       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5lm6x\": pod kindnet-5lm6x is already assigned to node \"ha-300623-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5lm6x" node="ha-300623-m04"
	E1026 01:03:33.137312       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5lm6x\": pod kindnet-5lm6x is already assigned to node \"ha-300623-m04\"" pod="kube-system/kindnet-5lm6x"
	E1026 01:03:33.152104       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jhv9k\": pod kube-proxy-jhv9k is already assigned to node \"ha-300623-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jhv9k" node="ha-300623-m04"
	E1026 01:03:33.153545       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jhv9k\": pod kube-proxy-jhv9k is already assigned to node \"ha-300623-m04\"" pod="kube-system/kube-proxy-jhv9k"
	
	
	==> kubelet <==
	Oct 26 01:05:19 ha-300623 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 01:05:19 ha-300623 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 01:05:19 ha-300623 kubelet[1306]: E1026 01:05:19.171492    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904719170828944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:19 ha-300623 kubelet[1306]: E1026 01:05:19.171604    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904719170828944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:29 ha-300623 kubelet[1306]: E1026 01:05:29.173388    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904729173040296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:29 ha-300623 kubelet[1306]: E1026 01:05:29.173412    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904729173040296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:39 ha-300623 kubelet[1306]: E1026 01:05:39.176311    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904739175567800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:39 ha-300623 kubelet[1306]: E1026 01:05:39.176778    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904739175567800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:49 ha-300623 kubelet[1306]: E1026 01:05:49.179258    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904749178892500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:49 ha-300623 kubelet[1306]: E1026 01:05:49.179567    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904749178892500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:59 ha-300623 kubelet[1306]: E1026 01:05:59.181750    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904759181221897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:59 ha-300623 kubelet[1306]: E1026 01:05:59.181791    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904759181221897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:09 ha-300623 kubelet[1306]: E1026 01:06:09.183203    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904769182765460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:09 ha-300623 kubelet[1306]: E1026 01:06:09.183277    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904769182765460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:19 ha-300623 kubelet[1306]: E1026 01:06:19.106419    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 26 01:06:19 ha-300623 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 26 01:06:19 ha-300623 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 26 01:06:19 ha-300623 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 01:06:19 ha-300623 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 01:06:19 ha-300623 kubelet[1306]: E1026 01:06:19.185785    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904779185440641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:19 ha-300623 kubelet[1306]: E1026 01:06:19.185827    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904779185440641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:29 ha-300623 kubelet[1306]: E1026 01:06:29.188435    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904789187815376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:29 ha-300623 kubelet[1306]: E1026 01:06:29.188477    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904789187815376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:39 ha-300623 kubelet[1306]: E1026 01:06:39.190241    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904799189890933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:39 ha-300623 kubelet[1306]: E1026 01:06:39.190296    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904799189890933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-300623 -n ha-300623
helpers_test.go:261: (dbg) Run:  kubectl --context ha-300623 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.57s)
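The etcd log above is dominated by "dropped internal Raft message since sending buffer is full (overloaded network)" warnings aimed at the stopped peer. The following is a minimal sketch (not part of the test suite) of how one might summarize those warnings per remote peer from a saved copy of the `minikube logs` output; the input file name is a hypothetical placeholder, and the JSON field names are taken from the log lines shown above.

// count_dropped_heartbeats.go - sketch only, assumes the post-mortem log
// dump above has been saved locally as ha-300623-logs.txt (hypothetical).
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// etcdLine captures only the fields we need from etcd's JSON log lines.
type etcdLine struct {
	Msg          string `json:"msg"`
	MessageType  string `json:"message-type"`
	RemotePeerID string `json:"remote-peer-id"`
}

func main() {
	f, err := os.Open("ha-300623-logs.txt") // hypothetical saved log dump
	if err != nil {
		panic(err)
	}
	defer f.Close()

	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some log lines are very long
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		start := strings.Index(line, "{")
		if start < 0 {
			continue
		}
		var e etcdLine
		if json.Unmarshal([]byte(line[start:]), &e) != nil {
			continue // not an etcd JSON line
		}
		if strings.HasPrefix(e.Msg, "dropped internal Raft message") {
			counts[e.RemotePeerID+"/"+e.MessageType]++
		}
	}
	for k, n := range counts {
		fmt.Printf("%s: %d dropped\n", k, n)
	}
}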

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr: (4.205254609s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
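The ha_test.go:437-446 assertions above imply a check that counts how many hosts, kubelets, and apiservers report "Running" in the `minikube status` output (four hosts and kubelets for a three-control-plane-plus-one-worker cluster, three apiservers). The sketch below illustrates that kind of check; it is not the actual ha_test.go implementation, and the function names and expected counts are assumptions drawn from the assertion messages.

// status_check.go - illustrative sketch of the check implied by the
// "status says not all ... are running" assertions above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// countRunning counts lines such as "host: Running" in minikube's text status output.
func countRunning(status, field string) int {
	return strings.Count(status, field+": Running")
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-300623", "status").CombinedOutput()
	status := string(out)
	if err != nil {
		// status exits non-zero while a node is stopped; the output is still usable.
		fmt.Printf("status returned error: %v\n", err)
	}
	fmt.Printf("hosts running:      %d (want 4)\n", countRunning(status, "host"))
	fmt.Printf("kubelets running:   %d (want 4)\n", countRunning(status, "kubelet"))
	fmt.Printf("apiservers running: %d (want 3)\n", countRunning(status, "apiserver"))
}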
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-300623 -n ha-300623
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-300623 logs -n 25: (1.288357463s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623:/home/docker/cp-test_ha-300623-m03_ha-300623.txt                       |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623 sudo cat                                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m03_ha-300623.txt                                 |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m02:/home/docker/cp-test_ha-300623-m03_ha-300623-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m02 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m03_ha-300623-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04:/home/docker/cp-test_ha-300623-m03_ha-300623-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m04 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m03_ha-300623-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp testdata/cp-test.txt                                                | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2355760230/001/cp-test_ha-300623-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623:/home/docker/cp-test_ha-300623-m04_ha-300623.txt                       |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623 sudo cat                                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623.txt                                 |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m02:/home/docker/cp-test_ha-300623-m04_ha-300623-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m02 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03:/home/docker/cp-test_ha-300623-m04_ha-300623-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m03 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-300623 node stop m02 -v=7                                                     | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-300623 node start m02 -v=7                                                    | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
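	For reference, each multi-row entry in the audit table above is a single command whose arguments were wrapped by the table renderer. Reassembled, the recorded invocations take roughly the following shape (a sketch only: the explicit `-p ha-300623` profile flag and the `--` separator are assumptions, the destination path is shortened for illustration, and node names and in-VM paths are taken from the table):
	
	    # copy a test file off worker node m04 to the host (illustrative destination path)
	    minikube -p ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt /tmp/cp-test_ha-300623-m04.txt
	    # read the file back on the node to verify the copy
	    minikube -p ha-300623 ssh -n ha-300623-m04 -- sudo cat /home/docker/cp-test.txt
	    # stop the secondary control-plane node (the step whose test later fails)
	    minikube -p ha-300623 node stop m02 -v=7 --alsologtostderr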
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 00:59:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 00:59:41.102327   27934 out.go:345] Setting OutFile to fd 1 ...
	I1026 00:59:41.102422   27934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:59:41.102427   27934 out.go:358] Setting ErrFile to fd 2...
	I1026 00:59:41.102431   27934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:59:41.102629   27934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 00:59:41.103175   27934 out.go:352] Setting JSON to false
	I1026 00:59:41.103986   27934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2521,"bootTime":1729901860,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 00:59:41.104085   27934 start.go:139] virtualization: kvm guest
	I1026 00:59:41.106060   27934 out.go:177] * [ha-300623] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 00:59:41.107343   27934 notify.go:220] Checking for updates...
	I1026 00:59:41.107361   27934 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 00:59:41.108566   27934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:59:41.109853   27934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 00:59:41.111166   27934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:59:41.112531   27934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 00:59:41.113798   27934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 00:59:41.115167   27934 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 00:59:41.148833   27934 out.go:177] * Using the kvm2 driver based on user configuration
	I1026 00:59:41.150115   27934 start.go:297] selected driver: kvm2
	I1026 00:59:41.150128   27934 start.go:901] validating driver "kvm2" against <nil>
	I1026 00:59:41.150139   27934 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 00:59:41.150812   27934 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:59:41.150910   27934 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 00:59:41.165692   27934 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 00:59:41.165750   27934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 00:59:41.166043   27934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 00:59:41.166082   27934 cni.go:84] Creating CNI manager for ""
	I1026 00:59:41.166138   27934 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1026 00:59:41.166151   27934 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 00:59:41.166210   27934 start.go:340] cluster config:
	{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 00:59:41.166340   27934 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:59:41.168250   27934 out.go:177] * Starting "ha-300623" primary control-plane node in "ha-300623" cluster
	I1026 00:59:41.169625   27934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 00:59:41.169671   27934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 00:59:41.169699   27934 cache.go:56] Caching tarball of preloaded images
	I1026 00:59:41.169771   27934 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 00:59:41.169781   27934 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 00:59:41.170066   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 00:59:41.170083   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json: {Name:mkc18d341848fb714503df8b4bfc42be69331fb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:59:41.170205   27934 start.go:360] acquireMachinesLock for ha-300623: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 00:59:41.170231   27934 start.go:364] duration metric: took 14.614µs to acquireMachinesLock for "ha-300623"
	I1026 00:59:41.170247   27934 start.go:93] Provisioning new machine with config: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 00:59:41.170298   27934 start.go:125] createHost starting for "" (driver="kvm2")
	I1026 00:59:41.171896   27934 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1026 00:59:41.172034   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:59:41.172078   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:59:41.186522   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39131
	I1026 00:59:41.186988   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:59:41.187517   27934 main.go:141] libmachine: Using API Version  1
	I1026 00:59:41.187539   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:59:41.187925   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:59:41.188146   27934 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 00:59:41.188284   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 00:59:41.188436   27934 start.go:159] libmachine.API.Create for "ha-300623" (driver="kvm2")
	I1026 00:59:41.188472   27934 client.go:168] LocalClient.Create starting
	I1026 00:59:41.188506   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 00:59:41.188539   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 00:59:41.188554   27934 main.go:141] libmachine: Parsing certificate...
	I1026 00:59:41.188604   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 00:59:41.188622   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 00:59:41.188635   27934 main.go:141] libmachine: Parsing certificate...
	I1026 00:59:41.188652   27934 main.go:141] libmachine: Running pre-create checks...
	I1026 00:59:41.188664   27934 main.go:141] libmachine: (ha-300623) Calling .PreCreateCheck
	I1026 00:59:41.189023   27934 main.go:141] libmachine: (ha-300623) Calling .GetConfigRaw
	I1026 00:59:41.189374   27934 main.go:141] libmachine: Creating machine...
	I1026 00:59:41.189386   27934 main.go:141] libmachine: (ha-300623) Calling .Create
	I1026 00:59:41.189526   27934 main.go:141] libmachine: (ha-300623) Creating KVM machine...
	I1026 00:59:41.190651   27934 main.go:141] libmachine: (ha-300623) DBG | found existing default KVM network
	I1026 00:59:41.191301   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.191170   27957 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I1026 00:59:41.191329   27934 main.go:141] libmachine: (ha-300623) DBG | created network xml: 
	I1026 00:59:41.191339   27934 main.go:141] libmachine: (ha-300623) DBG | <network>
	I1026 00:59:41.191366   27934 main.go:141] libmachine: (ha-300623) DBG |   <name>mk-ha-300623</name>
	I1026 00:59:41.191399   27934 main.go:141] libmachine: (ha-300623) DBG |   <dns enable='no'/>
	I1026 00:59:41.191415   27934 main.go:141] libmachine: (ha-300623) DBG |   
	I1026 00:59:41.191424   27934 main.go:141] libmachine: (ha-300623) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1026 00:59:41.191431   27934 main.go:141] libmachine: (ha-300623) DBG |     <dhcp>
	I1026 00:59:41.191438   27934 main.go:141] libmachine: (ha-300623) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1026 00:59:41.191445   27934 main.go:141] libmachine: (ha-300623) DBG |     </dhcp>
	I1026 00:59:41.191450   27934 main.go:141] libmachine: (ha-300623) DBG |   </ip>
	I1026 00:59:41.191457   27934 main.go:141] libmachine: (ha-300623) DBG |   
	I1026 00:59:41.191462   27934 main.go:141] libmachine: (ha-300623) DBG | </network>
	I1026 00:59:41.191489   27934 main.go:141] libmachine: (ha-300623) DBG | 
	I1026 00:59:41.196331   27934 main.go:141] libmachine: (ha-300623) DBG | trying to create private KVM network mk-ha-300623 192.168.39.0/24...
	I1026 00:59:41.258139   27934 main.go:141] libmachine: (ha-300623) DBG | private KVM network mk-ha-300623 192.168.39.0/24 created
	I1026 00:59:41.258172   27934 main.go:141] libmachine: (ha-300623) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623 ...
	I1026 00:59:41.258186   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.258104   27957 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:59:41.258203   27934 main.go:141] libmachine: (ha-300623) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 00:59:41.258226   27934 main.go:141] libmachine: (ha-300623) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 00:59:41.511971   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.511837   27957 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa...
	I1026 00:59:41.679961   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.679835   27957 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/ha-300623.rawdisk...
	I1026 00:59:41.680008   27934 main.go:141] libmachine: (ha-300623) DBG | Writing magic tar header
	I1026 00:59:41.680023   27934 main.go:141] libmachine: (ha-300623) DBG | Writing SSH key tar header
	I1026 00:59:41.680037   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.679951   27957 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623 ...
	I1026 00:59:41.680109   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623
	I1026 00:59:41.680139   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 00:59:41.680156   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623 (perms=drwx------)
	I1026 00:59:41.680166   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:59:41.680185   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 00:59:41.680194   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 00:59:41.680209   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins
	I1026 00:59:41.680219   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home
	I1026 00:59:41.680230   27934 main.go:141] libmachine: (ha-300623) DBG | Skipping /home - not owner
	I1026 00:59:41.680244   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 00:59:41.680257   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 00:59:41.680313   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 00:59:41.680344   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 00:59:41.680359   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 00:59:41.680367   27934 main.go:141] libmachine: (ha-300623) Creating domain...
	I1026 00:59:41.681340   27934 main.go:141] libmachine: (ha-300623) define libvirt domain using xml: 
	I1026 00:59:41.681362   27934 main.go:141] libmachine: (ha-300623) <domain type='kvm'>
	I1026 00:59:41.681370   27934 main.go:141] libmachine: (ha-300623)   <name>ha-300623</name>
	I1026 00:59:41.681381   27934 main.go:141] libmachine: (ha-300623)   <memory unit='MiB'>2200</memory>
	I1026 00:59:41.681403   27934 main.go:141] libmachine: (ha-300623)   <vcpu>2</vcpu>
	I1026 00:59:41.681438   27934 main.go:141] libmachine: (ha-300623)   <features>
	I1026 00:59:41.681448   27934 main.go:141] libmachine: (ha-300623)     <acpi/>
	I1026 00:59:41.681452   27934 main.go:141] libmachine: (ha-300623)     <apic/>
	I1026 00:59:41.681457   27934 main.go:141] libmachine: (ha-300623)     <pae/>
	I1026 00:59:41.681471   27934 main.go:141] libmachine: (ha-300623)     
	I1026 00:59:41.681479   27934 main.go:141] libmachine: (ha-300623)   </features>
	I1026 00:59:41.681484   27934 main.go:141] libmachine: (ha-300623)   <cpu mode='host-passthrough'>
	I1026 00:59:41.681489   27934 main.go:141] libmachine: (ha-300623)   
	I1026 00:59:41.681494   27934 main.go:141] libmachine: (ha-300623)   </cpu>
	I1026 00:59:41.681500   27934 main.go:141] libmachine: (ha-300623)   <os>
	I1026 00:59:41.681504   27934 main.go:141] libmachine: (ha-300623)     <type>hvm</type>
	I1026 00:59:41.681512   27934 main.go:141] libmachine: (ha-300623)     <boot dev='cdrom'/>
	I1026 00:59:41.681520   27934 main.go:141] libmachine: (ha-300623)     <boot dev='hd'/>
	I1026 00:59:41.681528   27934 main.go:141] libmachine: (ha-300623)     <bootmenu enable='no'/>
	I1026 00:59:41.681532   27934 main.go:141] libmachine: (ha-300623)   </os>
	I1026 00:59:41.681539   27934 main.go:141] libmachine: (ha-300623)   <devices>
	I1026 00:59:41.681544   27934 main.go:141] libmachine: (ha-300623)     <disk type='file' device='cdrom'>
	I1026 00:59:41.681575   27934 main.go:141] libmachine: (ha-300623)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/boot2docker.iso'/>
	I1026 00:59:41.681594   27934 main.go:141] libmachine: (ha-300623)       <target dev='hdc' bus='scsi'/>
	I1026 00:59:41.681606   27934 main.go:141] libmachine: (ha-300623)       <readonly/>
	I1026 00:59:41.681615   27934 main.go:141] libmachine: (ha-300623)     </disk>
	I1026 00:59:41.681625   27934 main.go:141] libmachine: (ha-300623)     <disk type='file' device='disk'>
	I1026 00:59:41.681635   27934 main.go:141] libmachine: (ha-300623)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 00:59:41.681651   27934 main.go:141] libmachine: (ha-300623)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/ha-300623.rawdisk'/>
	I1026 00:59:41.681664   27934 main.go:141] libmachine: (ha-300623)       <target dev='hda' bus='virtio'/>
	I1026 00:59:41.681675   27934 main.go:141] libmachine: (ha-300623)     </disk>
	I1026 00:59:41.681686   27934 main.go:141] libmachine: (ha-300623)     <interface type='network'>
	I1026 00:59:41.681698   27934 main.go:141] libmachine: (ha-300623)       <source network='mk-ha-300623'/>
	I1026 00:59:41.681709   27934 main.go:141] libmachine: (ha-300623)       <model type='virtio'/>
	I1026 00:59:41.681719   27934 main.go:141] libmachine: (ha-300623)     </interface>
	I1026 00:59:41.681734   27934 main.go:141] libmachine: (ha-300623)     <interface type='network'>
	I1026 00:59:41.681746   27934 main.go:141] libmachine: (ha-300623)       <source network='default'/>
	I1026 00:59:41.681756   27934 main.go:141] libmachine: (ha-300623)       <model type='virtio'/>
	I1026 00:59:41.681773   27934 main.go:141] libmachine: (ha-300623)     </interface>
	I1026 00:59:41.681784   27934 main.go:141] libmachine: (ha-300623)     <serial type='pty'>
	I1026 00:59:41.681794   27934 main.go:141] libmachine: (ha-300623)       <target port='0'/>
	I1026 00:59:41.681803   27934 main.go:141] libmachine: (ha-300623)     </serial>
	I1026 00:59:41.681813   27934 main.go:141] libmachine: (ha-300623)     <console type='pty'>
	I1026 00:59:41.681823   27934 main.go:141] libmachine: (ha-300623)       <target type='serial' port='0'/>
	I1026 00:59:41.681835   27934 main.go:141] libmachine: (ha-300623)     </console>
	I1026 00:59:41.681847   27934 main.go:141] libmachine: (ha-300623)     <rng model='virtio'>
	I1026 00:59:41.681861   27934 main.go:141] libmachine: (ha-300623)       <backend model='random'>/dev/random</backend>
	I1026 00:59:41.681876   27934 main.go:141] libmachine: (ha-300623)     </rng>
	I1026 00:59:41.681884   27934 main.go:141] libmachine: (ha-300623)     
	I1026 00:59:41.681893   27934 main.go:141] libmachine: (ha-300623)     
	I1026 00:59:41.681902   27934 main.go:141] libmachine: (ha-300623)   </devices>
	I1026 00:59:41.681910   27934 main.go:141] libmachine: (ha-300623) </domain>
	I1026 00:59:41.681919   27934 main.go:141] libmachine: (ha-300623) 
	I1026 00:59:41.685794   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:bc:3c:c8 in network default
	I1026 00:59:41.686289   27934 main.go:141] libmachine: (ha-300623) Ensuring networks are active...
	I1026 00:59:41.686312   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:41.686908   27934 main.go:141] libmachine: (ha-300623) Ensuring network default is active
	I1026 00:59:41.687318   27934 main.go:141] libmachine: (ha-300623) Ensuring network mk-ha-300623 is active
	I1026 00:59:41.687714   27934 main.go:141] libmachine: (ha-300623) Getting domain xml...
	I1026 00:59:41.688278   27934 main.go:141] libmachine: (ha-300623) Creating domain...
	I1026 00:59:42.865174   27934 main.go:141] libmachine: (ha-300623) Waiting to get IP...
	I1026 00:59:42.866030   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:42.866436   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:42.866478   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:42.866424   27957 retry.go:31] will retry after 310.395452ms: waiting for machine to come up
	I1026 00:59:43.178911   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:43.179377   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:43.179517   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:43.179326   27957 retry.go:31] will retry after 258.757335ms: waiting for machine to come up
	I1026 00:59:43.439460   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:43.439855   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:43.439883   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:43.439810   27957 retry.go:31] will retry after 476.137443ms: waiting for machine to come up
	I1026 00:59:43.917472   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:43.917875   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:43.917910   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:43.917853   27957 retry.go:31] will retry after 411.866237ms: waiting for machine to come up
	I1026 00:59:44.331261   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:44.331762   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:44.331800   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:44.331724   27957 retry.go:31] will retry after 639.236783ms: waiting for machine to come up
	I1026 00:59:44.972039   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:44.972415   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:44.972443   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:44.972363   27957 retry.go:31] will retry after 943.318782ms: waiting for machine to come up
	I1026 00:59:45.917370   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:45.917808   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:45.917870   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:45.917775   27957 retry.go:31] will retry after 1.007000764s: waiting for machine to come up
	I1026 00:59:46.926545   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:46.926930   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:46.926955   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:46.926890   27957 retry.go:31] will retry after 905.175073ms: waiting for machine to come up
	I1026 00:59:47.834112   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:47.834468   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:47.834505   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:47.834452   27957 retry.go:31] will retry after 1.696390131s: waiting for machine to come up
	I1026 00:59:49.533204   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:49.533596   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:49.533625   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:49.533577   27957 retry.go:31] will retry after 2.087564363s: waiting for machine to come up
	I1026 00:59:51.622505   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:51.622952   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:51.623131   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:51.622900   27957 retry.go:31] will retry after 2.813881441s: waiting for machine to come up
	I1026 00:59:54.439730   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:54.440081   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:54.440111   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:54.440045   27957 retry.go:31] will retry after 2.560428672s: waiting for machine to come up
	I1026 00:59:57.002066   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:57.002394   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:57.002424   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:57.002352   27957 retry.go:31] will retry after 3.377744145s: waiting for machine to come up
	I1026 01:00:00.384015   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.384460   27934 main.go:141] libmachine: (ha-300623) Found IP for machine: 192.168.39.183
	I1026 01:00:00.384479   27934 main.go:141] libmachine: (ha-300623) Reserving static IP address...
	I1026 01:00:00.384505   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has current primary IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.384856   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find host DHCP lease matching {name: "ha-300623", mac: "52:54:00:4d:a0:46", ip: "192.168.39.183"} in network mk-ha-300623
	I1026 01:00:00.455221   27934 main.go:141] libmachine: (ha-300623) DBG | Getting to WaitForSSH function...
	I1026 01:00:00.455245   27934 main.go:141] libmachine: (ha-300623) Reserved static IP address: 192.168.39.183
	I1026 01:00:00.455253   27934 main.go:141] libmachine: (ha-300623) Waiting for SSH to be available...
	I1026 01:00:00.457760   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.458200   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.458223   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.458402   27934 main.go:141] libmachine: (ha-300623) DBG | Using SSH client type: external
	I1026 01:00:00.458428   27934 main.go:141] libmachine: (ha-300623) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa (-rw-------)
	I1026 01:00:00.458460   27934 main.go:141] libmachine: (ha-300623) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 01:00:00.458475   27934 main.go:141] libmachine: (ha-300623) DBG | About to run SSH command:
	I1026 01:00:00.458487   27934 main.go:141] libmachine: (ha-300623) DBG | exit 0
	I1026 01:00:00.585473   27934 main.go:141] libmachine: (ha-300623) DBG | SSH cmd err, output: <nil>: 
	I1026 01:00:00.585717   27934 main.go:141] libmachine: (ha-300623) KVM machine creation complete!
	I1026 01:00:00.586041   27934 main.go:141] libmachine: (ha-300623) Calling .GetConfigRaw
	I1026 01:00:00.586564   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:00.586735   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:00.586856   27934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 01:00:00.586870   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:00.588144   27934 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 01:00:00.588156   27934 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 01:00:00.588161   27934 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 01:00:00.588166   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:00.590434   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.590800   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.590815   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.590958   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:00.591118   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.591291   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.591416   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:00.591579   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:00.591799   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:00.591812   27934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 01:00:00.700544   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:00:00.700568   27934 main.go:141] libmachine: Detecting the provisioner...
	I1026 01:00:00.700586   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:00.703305   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.703686   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.703708   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.703827   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:00.704016   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.704163   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.704286   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:00.704450   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:00.704607   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:00.704617   27934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 01:00:00.813937   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 01:00:00.814027   27934 main.go:141] libmachine: found compatible host: buildroot
	I1026 01:00:00.814042   27934 main.go:141] libmachine: Provisioning with buildroot...
	I1026 01:00:00.814078   27934 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:00:00.814305   27934 buildroot.go:166] provisioning hostname "ha-300623"
	I1026 01:00:00.814333   27934 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:00:00.814495   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:00.817076   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.817394   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.817438   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.817578   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:00.817764   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.817892   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.818015   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:00.818165   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:00.818334   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:00.818344   27934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-300623 && echo "ha-300623" | sudo tee /etc/hostname
	I1026 01:00:00.943069   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-300623
	
	I1026 01:00:00.943097   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:00.946005   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.946325   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.946354   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.946524   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:00.946840   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.947004   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.947144   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:00.947328   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:00.947549   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:00.947572   27934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-300623' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-300623/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-300623' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:00:01.065899   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:00:01.065958   27934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:00:01.066012   27934 buildroot.go:174] setting up certificates
	I1026 01:00:01.066027   27934 provision.go:84] configureAuth start
	I1026 01:00:01.066042   27934 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:00:01.066285   27934 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:00:01.069069   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.069397   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.069440   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.069574   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.071665   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.072025   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.072053   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.072211   27934 provision.go:143] copyHostCerts
	I1026 01:00:01.072292   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:00:01.072346   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:00:01.072359   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:00:01.072430   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:00:01.072514   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:00:01.072533   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:00:01.072540   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:00:01.072577   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:00:01.072670   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:00:01.072703   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:00:01.072711   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:00:01.072743   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:00:01.072808   27934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.ha-300623 san=[127.0.0.1 192.168.39.183 ha-300623 localhost minikube]
	I1026 01:00:01.133729   27934 provision.go:177] copyRemoteCerts
	I1026 01:00:01.133783   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:00:01.133804   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.136311   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.136591   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.136617   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.136770   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.136937   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.137059   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.137192   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:01.222921   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:00:01.222983   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:00:01.245372   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:00:01.245444   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1026 01:00:01.267891   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:00:01.267957   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 01:00:01.289667   27934 provision.go:87] duration metric: took 223.628307ms to configureAuth
	I1026 01:00:01.289699   27934 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:00:01.289880   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:01.289953   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.292672   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.292982   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.293012   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.293184   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.293375   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.293624   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.293732   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.293904   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:01.294111   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:01.294137   27934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:00:01.522070   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:00:01.522096   27934 main.go:141] libmachine: Checking connection to Docker...
	I1026 01:00:01.522103   27934 main.go:141] libmachine: (ha-300623) Calling .GetURL
	I1026 01:00:01.523378   27934 main.go:141] libmachine: (ha-300623) DBG | Using libvirt version 6000000
	I1026 01:00:01.525286   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.525641   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.525670   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.525803   27934 main.go:141] libmachine: Docker is up and running!
	I1026 01:00:01.525822   27934 main.go:141] libmachine: Reticulating splines...
	I1026 01:00:01.525829   27934 client.go:171] duration metric: took 20.337349207s to LocalClient.Create
	I1026 01:00:01.525853   27934 start.go:167] duration metric: took 20.337416513s to libmachine.API.Create "ha-300623"
	I1026 01:00:01.525867   27934 start.go:293] postStartSetup for "ha-300623" (driver="kvm2")
	I1026 01:00:01.525878   27934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:00:01.525899   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.526150   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:00:01.526178   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.528275   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.528583   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.528614   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.528742   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.528907   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.529035   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.529169   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:01.615528   27934 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:00:01.619526   27934 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:00:01.619547   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:00:01.619607   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:00:01.619676   27934 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:00:01.619685   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /etc/ssl/certs/176152.pem
	I1026 01:00:01.619772   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:00:01.628818   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:00:01.651055   27934 start.go:296] duration metric: took 125.175871ms for postStartSetup
	I1026 01:00:01.651106   27934 main.go:141] libmachine: (ha-300623) Calling .GetConfigRaw
	I1026 01:00:01.651707   27934 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:00:01.654048   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.654337   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.654358   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.654637   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:00:01.654812   27934 start.go:128] duration metric: took 20.484504528s to createHost
	I1026 01:00:01.654833   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.656877   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.657252   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.657277   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.657399   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.657609   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.657759   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.657866   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.657999   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:01.658194   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:01.658205   27934 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:00:01.770028   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729904401.731044736
	
	I1026 01:00:01.770051   27934 fix.go:216] guest clock: 1729904401.731044736
	I1026 01:00:01.770074   27934 fix.go:229] Guest: 2024-10-26 01:00:01.731044736 +0000 UTC Remote: 2024-10-26 01:00:01.654822884 +0000 UTC m=+20.590184391 (delta=76.221852ms)
	I1026 01:00:01.770101   27934 fix.go:200] guest clock delta is within tolerance: 76.221852ms
	I1026 01:00:01.770108   27934 start.go:83] releasing machines lock for "ha-300623", held for 20.599868049s
	I1026 01:00:01.770184   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.770452   27934 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:00:01.772669   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.773035   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.773066   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.773320   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.773757   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.773942   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.774055   27934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:00:01.774095   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.774157   27934 ssh_runner.go:195] Run: cat /version.json
	I1026 01:00:01.774180   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.776503   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.776822   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.776846   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.776862   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.777013   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.777160   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.777266   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.777287   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.777291   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.777476   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.777463   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:01.777588   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.777703   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.777819   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:01.889672   27934 ssh_runner.go:195] Run: systemctl --version
	I1026 01:00:01.895441   27934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:00:02.062750   27934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:00:02.068559   27934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:00:02.068640   27934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:00:02.085755   27934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 01:00:02.085784   27934 start.go:495] detecting cgroup driver to use...
	I1026 01:00:02.085879   27934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:00:02.103715   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:00:02.116629   27934 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:00:02.116698   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:00:02.129921   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:00:02.143297   27934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:00:02.262539   27934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:00:02.410776   27934 docker.go:233] disabling docker service ...
	I1026 01:00:02.410852   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:00:02.425252   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:00:02.438874   27934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:00:02.567343   27934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:00:02.692382   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:00:02.705780   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:00:02.723128   27934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:00:02.723196   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.733126   27934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:00:02.733204   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.743104   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.752720   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.762245   27934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:00:02.772039   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.781522   27934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.797499   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.807723   27934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:00:02.816764   27934 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 01:00:02.816838   27934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 01:00:02.830364   27934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:00:02.840309   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:00:02.959488   27934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 01:00:03.048870   27934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:00:03.048952   27934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:00:03.053750   27934 start.go:563] Will wait 60s for crictl version
	I1026 01:00:03.053801   27934 ssh_runner.go:195] Run: which crictl
	I1026 01:00:03.057147   27934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:00:03.096489   27934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:00:03.096564   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:00:03.124313   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:00:03.153078   27934 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:00:03.154469   27934 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:00:03.157053   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:03.157290   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:03.157320   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:03.157571   27934 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:00:03.161502   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:00:03.173922   27934 kubeadm.go:883] updating cluster {Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 01:00:03.174024   27934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:00:03.174067   27934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:00:03.205502   27934 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1026 01:00:03.205563   27934 ssh_runner.go:195] Run: which lz4
	I1026 01:00:03.209242   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1026 01:00:03.209334   27934 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 01:00:03.213268   27934 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 01:00:03.213294   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1026 01:00:04.450368   27934 crio.go:462] duration metric: took 1.241064009s to copy over tarball
	I1026 01:00:04.450448   27934 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 01:00:06.473538   27934 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.023056026s)
	I1026 01:00:06.473572   27934 crio.go:469] duration metric: took 2.023171959s to extract the tarball
	I1026 01:00:06.473605   27934 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 01:00:06.509382   27934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:00:06.550351   27934 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 01:00:06.550371   27934 cache_images.go:84] Images are preloaded, skipping loading
	I1026 01:00:06.550379   27934 kubeadm.go:934] updating node { 192.168.39.183 8443 v1.31.2 crio true true} ...
	I1026 01:00:06.550479   27934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-300623 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 01:00:06.550540   27934 ssh_runner.go:195] Run: crio config
	I1026 01:00:06.601899   27934 cni.go:84] Creating CNI manager for ""
	I1026 01:00:06.601920   27934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1026 01:00:06.601928   27934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 01:00:06.601953   27934 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-300623 NodeName:ha-300623 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 01:00:06.602065   27934 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-300623"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.183"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 01:00:06.602090   27934 kube-vip.go:115] generating kube-vip config ...
	I1026 01:00:06.602134   27934 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1026 01:00:06.618905   27934 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1026 01:00:06.619004   27934 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1026 01:00:06.619054   27934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:00:06.628422   27934 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 01:00:06.628482   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1026 01:00:06.637507   27934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1026 01:00:06.653506   27934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:00:06.669385   27934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1026 01:00:06.685316   27934 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1026 01:00:06.701298   27934 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1026 01:00:06.704780   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:00:06.716358   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:00:06.835294   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:00:06.851617   27934 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623 for IP: 192.168.39.183
	I1026 01:00:06.851643   27934 certs.go:194] generating shared ca certs ...
	I1026 01:00:06.851663   27934 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:06.851825   27934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:00:06.851928   27934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:00:06.851951   27934 certs.go:256] generating profile certs ...
	I1026 01:00:06.852032   27934 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key
	I1026 01:00:06.852053   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt with IP's: []
	I1026 01:00:07.025844   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt ...
	I1026 01:00:07.025878   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt: {Name:mk0969781384c8eb24d904330417d9f7d1f6988a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.026073   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key ...
	I1026 01:00:07.026087   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key: {Name:mkbd66f66cfdc11b06ed7ee27efeab2c35691371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.026190   27934 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.30b82e6a
	I1026 01:00:07.026206   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.30b82e6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.254]
	I1026 01:00:07.091648   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.30b82e6a ...
	I1026 01:00:07.091676   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.30b82e6a: {Name:mk79ee9c8c68f427992ae46daac972e5a80d39e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.091862   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.30b82e6a ...
	I1026 01:00:07.091878   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.30b82e6a: {Name:mk0161ea9da0d9d1941870c52b97be187bff2c45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.091976   27934 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.30b82e6a -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt
	I1026 01:00:07.092075   27934 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.30b82e6a -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key
	I1026 01:00:07.092130   27934 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key
	I1026 01:00:07.092145   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt with IP's: []
	I1026 01:00:07.288723   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt ...
	I1026 01:00:07.288754   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt: {Name:mka585c80540dcf4447ce80873c4b4204a6ac833 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.288941   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key ...
	I1026 01:00:07.288955   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key: {Name:mk2a46d0d0037729eebdc4ee5998eb5ddbae3abb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.289048   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:00:07.289071   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:00:07.289091   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:00:07.289110   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:00:07.289128   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:00:07.289145   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:00:07.289157   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:00:07.289174   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:00:07.289238   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:00:07.289301   27934 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:00:07.289321   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:00:07.289357   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:00:07.289389   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:00:07.289437   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:00:07.289497   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:00:07.289533   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /usr/share/ca-certificates/176152.pem
	I1026 01:00:07.289554   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:07.289572   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem -> /usr/share/ca-certificates/17615.pem
	I1026 01:00:07.290185   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:00:07.315249   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:00:07.338589   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:00:07.361991   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:00:07.385798   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 01:00:07.409069   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 01:00:07.431845   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:00:07.454880   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:00:07.477392   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:00:07.500857   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:00:07.523684   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:00:07.546154   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 01:00:07.562082   27934 ssh_runner.go:195] Run: openssl version
	I1026 01:00:07.567710   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:00:07.578511   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:00:07.582871   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:00:07.582924   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:00:07.588401   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:00:07.601567   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:00:07.628525   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:07.634748   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:07.634819   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:07.643756   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:00:07.657734   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:00:07.668305   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:00:07.672451   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:00:07.672508   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:00:07.677939   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 01:00:07.688219   27934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:00:07.691924   27934 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 01:00:07.691988   27934 kubeadm.go:392] StartCluster: {Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:00:07.692059   27934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 01:00:07.692137   27934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 01:00:07.731345   27934 cri.go:89] found id: ""
	I1026 01:00:07.731417   27934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 01:00:07.741208   27934 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 01:00:07.750623   27934 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 01:00:07.760311   27934 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 01:00:07.760340   27934 kubeadm.go:157] found existing configuration files:
	
	I1026 01:00:07.760383   27934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 01:00:07.769207   27934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 01:00:07.769267   27934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 01:00:07.778578   27934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 01:00:07.787579   27934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 01:00:07.787661   27934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 01:00:07.797042   27934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 01:00:07.805955   27934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 01:00:07.806016   27934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 01:00:07.815274   27934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 01:00:07.824206   27934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 01:00:07.824269   27934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 01:00:07.833410   27934 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 01:00:07.938802   27934 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1026 01:00:07.938923   27934 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 01:00:08.028635   27934 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 01:00:08.028791   27934 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 01:00:08.028932   27934 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 01:00:08.038844   27934 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 01:00:08.041881   27934 out.go:235]   - Generating certificates and keys ...
	I1026 01:00:08.042903   27934 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 01:00:08.042973   27934 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 01:00:08.315204   27934 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 01:00:08.725495   27934 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1026 01:00:08.806960   27934 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1026 01:00:08.984098   27934 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1026 01:00:09.149484   27934 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1026 01:00:09.149653   27934 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-300623 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1026 01:00:09.309448   27934 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1026 01:00:09.309592   27934 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-300623 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1026 01:00:09.556294   27934 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 01:00:09.712766   27934 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 01:00:10.018193   27934 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1026 01:00:10.018258   27934 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 01:00:10.257230   27934 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 01:00:10.645833   27934 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 01:00:10.887377   27934 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 01:00:11.179208   27934 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 01:00:11.353056   27934 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 01:00:11.353655   27934 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 01:00:11.356992   27934 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 01:00:11.358796   27934 out.go:235]   - Booting up control plane ...
	I1026 01:00:11.358907   27934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 01:00:11.358983   27934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 01:00:11.359320   27934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 01:00:11.375691   27934 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 01:00:11.384224   27934 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 01:00:11.384282   27934 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 01:00:11.520735   27934 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 01:00:11.520904   27934 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 01:00:12.022375   27934 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.622573ms
	I1026 01:00:12.022456   27934 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1026 01:00:18.050317   27934 kubeadm.go:310] [api-check] The API server is healthy after 6.027294666s
	I1026 01:00:18.065132   27934 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 01:00:18.091049   27934 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 01:00:18.625277   27934 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 01:00:18.625502   27934 kubeadm.go:310] [mark-control-plane] Marking the node ha-300623 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 01:00:18.641286   27934 kubeadm.go:310] [bootstrap-token] Using token: 0x0agx.12z45ob3hq7so0d8
	I1026 01:00:18.642941   27934 out.go:235]   - Configuring RBAC rules ...
	I1026 01:00:18.643084   27934 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 01:00:18.651507   27934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 01:00:18.661575   27934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 01:00:18.665545   27934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 01:00:18.669512   27934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 01:00:18.677272   27934 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 01:00:18.691190   27934 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 01:00:18.958591   27934 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1026 01:00:19.464064   27934 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1026 01:00:19.464088   27934 kubeadm.go:310] 
	I1026 01:00:19.464204   27934 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1026 01:00:19.464225   27934 kubeadm.go:310] 
	I1026 01:00:19.464365   27934 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1026 01:00:19.464377   27934 kubeadm.go:310] 
	I1026 01:00:19.464406   27934 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1026 01:00:19.464485   27934 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 01:00:19.464567   27934 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 01:00:19.464579   27934 kubeadm.go:310] 
	I1026 01:00:19.464644   27934 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1026 01:00:19.464655   27934 kubeadm.go:310] 
	I1026 01:00:19.464719   27934 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 01:00:19.464726   27934 kubeadm.go:310] 
	I1026 01:00:19.464814   27934 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1026 01:00:19.464930   27934 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 01:00:19.465024   27934 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 01:00:19.465033   27934 kubeadm.go:310] 
	I1026 01:00:19.465247   27934 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 01:00:19.465347   27934 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1026 01:00:19.465355   27934 kubeadm.go:310] 
	I1026 01:00:19.465464   27934 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0x0agx.12z45ob3hq7so0d8 \
	I1026 01:00:19.465592   27934 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d \
	I1026 01:00:19.465626   27934 kubeadm.go:310] 	--control-plane 
	I1026 01:00:19.465634   27934 kubeadm.go:310] 
	I1026 01:00:19.465757   27934 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1026 01:00:19.465771   27934 kubeadm.go:310] 
	I1026 01:00:19.465887   27934 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0x0agx.12z45ob3hq7so0d8 \
	I1026 01:00:19.466042   27934 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d 
	I1026 01:00:19.466324   27934 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 01:00:19.466354   27934 cni.go:84] Creating CNI manager for ""
	I1026 01:00:19.466370   27934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1026 01:00:19.468090   27934 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1026 01:00:19.469492   27934 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 01:00:19.474603   27934 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1026 01:00:19.474628   27934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 01:00:19.493103   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 01:00:19.838794   27934 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 01:00:19.838909   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:19.838923   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-300623 minikube.k8s.io/updated_at=2024_10_26T01_00_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=ha-300623 minikube.k8s.io/primary=true
	I1026 01:00:19.860886   27934 ops.go:34] apiserver oom_adj: -16
	I1026 01:00:19.991866   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:20.492140   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:20.992964   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:21.492707   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:21.992237   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:22.491957   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:22.992426   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:23.492181   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:23.615897   27934 kubeadm.go:1113] duration metric: took 3.777077904s to wait for elevateKubeSystemPrivileges
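The repeated `kubectl get sa default` calls are a readiness poll: kubeadm reports success before the kube-system controllers have necessarily created the `default` ServiceAccount, and addon manifests cannot be applied until it exists. A hedged client-go sketch of the same poll follows; minikube shells out to the bundled kubectl instead, as the log shows, and the kubeconfig path is an assumption.

	// wait_default_sa.go: poll until the "default" ServiceAccount exists, mirroring
	// the repeated `kubectl get sa default` calls in the log. Illustrative client-go
	// sketch, not minikube's implementation.
	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			_, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
			if err == nil {
				log.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between attempts
		}
		log.Fatal("timed out waiting for the default service account")
	}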
	I1026 01:00:23.615938   27934 kubeadm.go:394] duration metric: took 15.923953549s to StartCluster
	I1026 01:00:23.615966   27934 settings.go:142] acquiring lock: {Name:mkb363a7a1b1532a7f832b54a0283d0a9e3d2b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:23.616076   27934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:00:23.616984   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:23.617268   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 01:00:23.617267   27934 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:00:23.617376   27934 start.go:241] waiting for startup goroutines ...
	I1026 01:00:23.617295   27934 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 01:00:23.617401   27934 addons.go:69] Setting storage-provisioner=true in profile "ha-300623"
	I1026 01:00:23.617447   27934 addons.go:234] Setting addon storage-provisioner=true in "ha-300623"
	I1026 01:00:23.617472   27934 addons.go:69] Setting default-storageclass=true in profile "ha-300623"
	I1026 01:00:23.617485   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:00:23.617498   27934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-300623"
	I1026 01:00:23.617505   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:23.617969   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.618010   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.618031   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.618073   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.633825   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35933
	I1026 01:00:23.633917   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38951
	I1026 01:00:23.634401   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.634418   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.634846   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.634864   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.634968   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.634988   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.635198   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.635332   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.635386   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:23.635834   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.635876   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.637603   27934 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:00:23.637812   27934 kapi.go:59] client config for ha-300623: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt", KeyFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key", CAFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 01:00:23.638218   27934 cert_rotation.go:140] Starting client certificate rotation controller
	I1026 01:00:23.638343   27934 addons.go:234] Setting addon default-storageclass=true in "ha-300623"
	I1026 01:00:23.638387   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:00:23.638626   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.638653   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.651480   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45267
	I1026 01:00:23.651965   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.652480   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.652510   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.652799   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.652991   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:23.653021   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42361
	I1026 01:00:23.654147   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.654693   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.654718   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.654832   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:23.655239   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.655791   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.655841   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.656920   27934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:00:23.658814   27934 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:00:23.658834   27934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 01:00:23.658853   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:23.662101   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:23.662598   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:23.662632   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:23.662848   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:23.663049   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:23.663200   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:23.663316   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:23.671976   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I1026 01:00:23.672433   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.672925   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.672950   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.673249   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.673483   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:23.675058   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:23.675265   27934 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 01:00:23.675282   27934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 01:00:23.675298   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:23.678185   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:23.678589   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:23.678611   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:23.678792   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:23.678957   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:23.679108   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:23.679249   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:23.762178   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 01:00:23.824448   27934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:00:23.874821   27934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 01:00:24.116804   27934 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
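The sed/kubectl-replace pipeline above splices a `hosts` block into the CoreDNS Corefile so pods can resolve host.minikube.internal to the host-side gateway (192.168.39.1). The same edit expressed with client-go, as a hedged sketch; the anchor string for the forward plugin is an assumption about the stock Corefile layout, and minikube performs the edit over SSH with sed and kubectl as logged.

	// inject_host_record.go: add a hosts block for host.minikube.internal to the
	// CoreDNS Corefile. Illustrative client-go sketch, not minikube's code path.
	package main

	import (
		"context"
		"log"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.TODO()
		cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		hosts := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
		// Insert the hosts block immediately before the forward plugin stanza
		// (assumes the default Corefile indentation seen in the sed pattern above).
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
			"        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)
		if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
		log.Println("host.minikube.internal record injected into CoreDNS")
	}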
	I1026 01:00:24.301862   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.301884   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.301919   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.301937   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.302168   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.302185   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.302194   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.302193   27934 main.go:141] libmachine: (ha-300623) DBG | Closing plugin on server side
	I1026 01:00:24.302200   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.302168   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.302221   27934 main.go:141] libmachine: (ha-300623) DBG | Closing plugin on server side
	I1026 01:00:24.302229   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.302239   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.302246   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.302447   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.302464   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.302531   27934 main.go:141] libmachine: (ha-300623) DBG | Closing plugin on server side
	I1026 01:00:24.302526   27934 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1026 01:00:24.302571   27934 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1026 01:00:24.302606   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.302631   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.302680   27934 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1026 01:00:24.302699   27934 round_trippers.go:469] Request Headers:
	I1026 01:00:24.302706   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:00:24.302710   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:00:24.315108   27934 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1026 01:00:24.315658   27934 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1026 01:00:24.315672   27934 round_trippers.go:469] Request Headers:
	I1026 01:00:24.315679   27934 round_trippers.go:473]     Content-Type: application/json
	I1026 01:00:24.315683   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:00:24.315686   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:00:24.318571   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:00:24.318791   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.318805   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.319072   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.319089   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.319093   27934 main.go:141] libmachine: (ha-300623) DBG | Closing plugin on server side
	I1026 01:00:24.321441   27934 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1026 01:00:24.323036   27934 addons.go:510] duration metric: took 705.743688ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 01:00:24.323074   27934 start.go:246] waiting for cluster config update ...
	I1026 01:00:24.323088   27934 start.go:255] writing updated cluster config ...
	I1026 01:00:24.324580   27934 out.go:201] 
	I1026 01:00:24.325800   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:24.325876   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:00:24.327345   27934 out.go:177] * Starting "ha-300623-m02" control-plane node in "ha-300623" cluster
	I1026 01:00:24.329009   27934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:00:24.329028   27934 cache.go:56] Caching tarball of preloaded images
	I1026 01:00:24.329124   27934 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:00:24.329138   27934 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 01:00:24.329209   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:00:24.329375   27934 start.go:360] acquireMachinesLock for ha-300623-m02: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 01:00:24.329429   27934 start.go:364] duration metric: took 35.088µs to acquireMachinesLock for "ha-300623-m02"
	I1026 01:00:24.329452   27934 start.go:93] Provisioning new machine with config: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:00:24.329544   27934 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1026 01:00:24.330943   27934 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1026 01:00:24.331025   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:24.331057   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:24.345495   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40299
	I1026 01:00:24.346002   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:24.346476   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:24.346491   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:24.346765   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:24.346970   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetMachineName
	I1026 01:00:24.347113   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:24.347293   27934 start.go:159] libmachine.API.Create for "ha-300623" (driver="kvm2")
	I1026 01:00:24.347323   27934 client.go:168] LocalClient.Create starting
	I1026 01:00:24.347359   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 01:00:24.347400   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 01:00:24.347421   27934 main.go:141] libmachine: Parsing certificate...
	I1026 01:00:24.347493   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 01:00:24.347519   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 01:00:24.347536   27934 main.go:141] libmachine: Parsing certificate...
	I1026 01:00:24.347559   27934 main.go:141] libmachine: Running pre-create checks...
	I1026 01:00:24.347568   27934 main.go:141] libmachine: (ha-300623-m02) Calling .PreCreateCheck
	I1026 01:00:24.347721   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetConfigRaw
	I1026 01:00:24.348120   27934 main.go:141] libmachine: Creating machine...
	I1026 01:00:24.348135   27934 main.go:141] libmachine: (ha-300623-m02) Calling .Create
	I1026 01:00:24.348260   27934 main.go:141] libmachine: (ha-300623-m02) Creating KVM machine...
	I1026 01:00:24.349505   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found existing default KVM network
	I1026 01:00:24.349630   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found existing private KVM network mk-ha-300623
	I1026 01:00:24.349770   27934 main.go:141] libmachine: (ha-300623-m02) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02 ...
	I1026 01:00:24.349806   27934 main.go:141] libmachine: (ha-300623-m02) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 01:00:24.349877   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:24.349757   28306 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:00:24.349949   27934 main.go:141] libmachine: (ha-300623-m02) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 01:00:24.581858   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:24.581729   28306 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa...
	I1026 01:00:24.824457   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:24.824338   28306 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/ha-300623-m02.rawdisk...
	I1026 01:00:24.824488   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Writing magic tar header
	I1026 01:00:24.824501   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Writing SSH key tar header
	I1026 01:00:24.824514   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:24.824445   28306 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02 ...
	I1026 01:00:24.824563   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02
	I1026 01:00:24.824601   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 01:00:24.824632   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:00:24.824643   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02 (perms=drwx------)
	I1026 01:00:24.824650   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 01:00:24.824656   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 01:00:24.824665   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 01:00:24.824671   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 01:00:24.824679   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 01:00:24.824685   27934 main.go:141] libmachine: (ha-300623-m02) Creating domain...
	I1026 01:00:24.824694   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 01:00:24.824702   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 01:00:24.824707   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins
	I1026 01:00:24.824717   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home
	I1026 01:00:24.824748   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Skipping /home - not owner
	I1026 01:00:24.825705   27934 main.go:141] libmachine: (ha-300623-m02) define libvirt domain using xml: 
	I1026 01:00:24.825725   27934 main.go:141] libmachine: (ha-300623-m02) <domain type='kvm'>
	I1026 01:00:24.825740   27934 main.go:141] libmachine: (ha-300623-m02)   <name>ha-300623-m02</name>
	I1026 01:00:24.825751   27934 main.go:141] libmachine: (ha-300623-m02)   <memory unit='MiB'>2200</memory>
	I1026 01:00:24.825760   27934 main.go:141] libmachine: (ha-300623-m02)   <vcpu>2</vcpu>
	I1026 01:00:24.825769   27934 main.go:141] libmachine: (ha-300623-m02)   <features>
	I1026 01:00:24.825777   27934 main.go:141] libmachine: (ha-300623-m02)     <acpi/>
	I1026 01:00:24.825786   27934 main.go:141] libmachine: (ha-300623-m02)     <apic/>
	I1026 01:00:24.825807   27934 main.go:141] libmachine: (ha-300623-m02)     <pae/>
	I1026 01:00:24.825825   27934 main.go:141] libmachine: (ha-300623-m02)     
	I1026 01:00:24.825837   27934 main.go:141] libmachine: (ha-300623-m02)   </features>
	I1026 01:00:24.825845   27934 main.go:141] libmachine: (ha-300623-m02)   <cpu mode='host-passthrough'>
	I1026 01:00:24.825850   27934 main.go:141] libmachine: (ha-300623-m02)   
	I1026 01:00:24.825856   27934 main.go:141] libmachine: (ha-300623-m02)   </cpu>
	I1026 01:00:24.825861   27934 main.go:141] libmachine: (ha-300623-m02)   <os>
	I1026 01:00:24.825868   27934 main.go:141] libmachine: (ha-300623-m02)     <type>hvm</type>
	I1026 01:00:24.825873   27934 main.go:141] libmachine: (ha-300623-m02)     <boot dev='cdrom'/>
	I1026 01:00:24.825880   27934 main.go:141] libmachine: (ha-300623-m02)     <boot dev='hd'/>
	I1026 01:00:24.825888   27934 main.go:141] libmachine: (ha-300623-m02)     <bootmenu enable='no'/>
	I1026 01:00:24.825901   27934 main.go:141] libmachine: (ha-300623-m02)   </os>
	I1026 01:00:24.825911   27934 main.go:141] libmachine: (ha-300623-m02)   <devices>
	I1026 01:00:24.825922   27934 main.go:141] libmachine: (ha-300623-m02)     <disk type='file' device='cdrom'>
	I1026 01:00:24.825934   27934 main.go:141] libmachine: (ha-300623-m02)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/boot2docker.iso'/>
	I1026 01:00:24.825942   27934 main.go:141] libmachine: (ha-300623-m02)       <target dev='hdc' bus='scsi'/>
	I1026 01:00:24.825947   27934 main.go:141] libmachine: (ha-300623-m02)       <readonly/>
	I1026 01:00:24.825955   27934 main.go:141] libmachine: (ha-300623-m02)     </disk>
	I1026 01:00:24.825960   27934 main.go:141] libmachine: (ha-300623-m02)     <disk type='file' device='disk'>
	I1026 01:00:24.825967   27934 main.go:141] libmachine: (ha-300623-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 01:00:24.825975   27934 main.go:141] libmachine: (ha-300623-m02)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/ha-300623-m02.rawdisk'/>
	I1026 01:00:24.825984   27934 main.go:141] libmachine: (ha-300623-m02)       <target dev='hda' bus='virtio'/>
	I1026 01:00:24.825991   27934 main.go:141] libmachine: (ha-300623-m02)     </disk>
	I1026 01:00:24.826012   27934 main.go:141] libmachine: (ha-300623-m02)     <interface type='network'>
	I1026 01:00:24.826033   27934 main.go:141] libmachine: (ha-300623-m02)       <source network='mk-ha-300623'/>
	I1026 01:00:24.826045   27934 main.go:141] libmachine: (ha-300623-m02)       <model type='virtio'/>
	I1026 01:00:24.826054   27934 main.go:141] libmachine: (ha-300623-m02)     </interface>
	I1026 01:00:24.826063   27934 main.go:141] libmachine: (ha-300623-m02)     <interface type='network'>
	I1026 01:00:24.826074   27934 main.go:141] libmachine: (ha-300623-m02)       <source network='default'/>
	I1026 01:00:24.826082   27934 main.go:141] libmachine: (ha-300623-m02)       <model type='virtio'/>
	I1026 01:00:24.826091   27934 main.go:141] libmachine: (ha-300623-m02)     </interface>
	I1026 01:00:24.826098   27934 main.go:141] libmachine: (ha-300623-m02)     <serial type='pty'>
	I1026 01:00:24.826107   27934 main.go:141] libmachine: (ha-300623-m02)       <target port='0'/>
	I1026 01:00:24.826112   27934 main.go:141] libmachine: (ha-300623-m02)     </serial>
	I1026 01:00:24.826119   27934 main.go:141] libmachine: (ha-300623-m02)     <console type='pty'>
	I1026 01:00:24.826136   27934 main.go:141] libmachine: (ha-300623-m02)       <target type='serial' port='0'/>
	I1026 01:00:24.826153   27934 main.go:141] libmachine: (ha-300623-m02)     </console>
	I1026 01:00:24.826166   27934 main.go:141] libmachine: (ha-300623-m02)     <rng model='virtio'>
	I1026 01:00:24.826178   27934 main.go:141] libmachine: (ha-300623-m02)       <backend model='random'>/dev/random</backend>
	I1026 01:00:24.826187   27934 main.go:141] libmachine: (ha-300623-m02)     </rng>
	I1026 01:00:24.826194   27934 main.go:141] libmachine: (ha-300623-m02)     
	I1026 01:00:24.826201   27934 main.go:141] libmachine: (ha-300623-m02)     
	I1026 01:00:24.826210   27934 main.go:141] libmachine: (ha-300623-m02)   </devices>
	I1026 01:00:24.826218   27934 main.go:141] libmachine: (ha-300623-m02) </domain>
	I1026 01:00:24.826230   27934 main.go:141] libmachine: (ha-300623-m02) 
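The XML printed above is handed to libvirt to define the ha-300623-m02 domain, which is then started ("Creating domain..."). A hedged sketch of that define-and-start step using the github.com/libvirt/libvirt-go bindings (an assumed dependency; minikube's kvm2 driver wraps libvirt through its own machine-driver plugin rather than this exact code):

	// define_domain.go: define and start a libvirt domain from XML like the one
	// printed above. Illustrative sketch against assumed libvirt-go bindings.
	package main

	import (
		"log"
		"os"

		libvirt "github.com/libvirt/libvirt-go"
	)

	func main() {
		xml, err := os.ReadFile("ha-300623-m02.xml") // the <domain> definition shown in the log
		if err != nil {
			log.Fatal(err)
		}
		conn, err := libvirt.NewConnect("qemu:///system") // matches KVMQemuURI in the profile config
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // "Creating domain..." starts the VM
			log.Fatal(err)
		}
		log.Println("domain ha-300623-m02 defined and started")
	}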
	I1026 01:00:24.834328   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:19:9b:85 in network default
	I1026 01:00:24.834898   27934 main.go:141] libmachine: (ha-300623-m02) Ensuring networks are active...
	I1026 01:00:24.834921   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:24.835679   27934 main.go:141] libmachine: (ha-300623-m02) Ensuring network default is active
	I1026 01:00:24.836033   27934 main.go:141] libmachine: (ha-300623-m02) Ensuring network mk-ha-300623 is active
	I1026 01:00:24.836422   27934 main.go:141] libmachine: (ha-300623-m02) Getting domain xml...
	I1026 01:00:24.837184   27934 main.go:141] libmachine: (ha-300623-m02) Creating domain...
	I1026 01:00:26.123801   27934 main.go:141] libmachine: (ha-300623-m02) Waiting to get IP...
	I1026 01:00:26.124786   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:26.125171   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:26.125213   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:26.125161   28306 retry.go:31] will retry after 239.473798ms: waiting for machine to come up
	I1026 01:00:26.366497   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:26.367035   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:26.367063   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:26.366991   28306 retry.go:31] will retry after 247.775109ms: waiting for machine to come up
	I1026 01:00:26.616299   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:26.616749   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:26.616770   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:26.616730   28306 retry.go:31] will retry after 304.793231ms: waiting for machine to come up
	I1026 01:00:26.923149   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:26.923677   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:26.923696   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:26.923618   28306 retry.go:31] will retry after 501.966284ms: waiting for machine to come up
	I1026 01:00:27.427149   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:27.427595   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:27.427620   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:27.427557   28306 retry.go:31] will retry after 462.793286ms: waiting for machine to come up
	I1026 01:00:27.892113   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:27.892649   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:27.892674   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:27.892601   28306 retry.go:31] will retry after 627.280628ms: waiting for machine to come up
	I1026 01:00:28.521634   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:28.522118   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:28.522154   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:28.522059   28306 retry.go:31] will retry after 1.043043357s: waiting for machine to come up
	I1026 01:00:29.566267   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:29.566670   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:29.566697   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:29.566641   28306 retry.go:31] will retry after 925.497125ms: waiting for machine to come up
	I1026 01:00:30.493367   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:30.493801   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:30.493826   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:30.493760   28306 retry.go:31] will retry after 1.604522192s: waiting for machine to come up
	I1026 01:00:32.100432   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:32.100961   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:32.100982   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:32.100919   28306 retry.go:31] will retry after 2.197958234s: waiting for machine to come up
	I1026 01:00:34.301338   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:34.301864   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:34.301891   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:34.301813   28306 retry.go:31] will retry after 1.917554174s: waiting for machine to come up
	I1026 01:00:36.221440   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:36.221869   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:36.221888   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:36.221830   28306 retry.go:31] will retry after 3.272341592s: waiting for machine to come up
	I1026 01:00:39.496057   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:39.496525   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:39.496555   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:39.496473   28306 retry.go:31] will retry after 3.688097346s: waiting for machine to come up
	I1026 01:00:43.186914   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:43.187251   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:43.187284   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:43.187241   28306 retry.go:31] will retry after 5.370855346s: waiting for machine to come up
	I1026 01:00:48.563319   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.563799   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has current primary IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.563826   27934 main.go:141] libmachine: (ha-300623-m02) Found IP for machine: 192.168.39.62
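"Waiting to get IP" is a poll of the mk-ha-300623 network's DHCP leases for the new domain's MAC address, with the retry delay growing on each attempt as the retry.go lines show. Below is a hedged sketch of that poll with the assumed libvirt-go bindings, using a fixed interval for brevity; the MAC and network name are taken from the log.

	// wait_for_ip.go: poll the network's DHCP leases until the machine's MAC shows
	// up, mirroring the "will retry after ..." loop above. Illustrative sketch only.
	package main

	import (
		"log"
		"time"

		libvirt "github.com/libvirt/libvirt-go"
	)

	func main() {
		const mac = "52:54:00:eb:f2:95" // MAC of ha-300623-m02 from the log

		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		net, err := conn.LookupNetworkByName("mk-ha-300623")
		if err != nil {
			log.Fatal(err)
		}
		defer net.Free()

		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			leases, err := net.GetDHCPLeases()
			if err != nil {
				log.Fatal(err)
			}
			for _, l := range leases {
				if l.Mac == mac {
					log.Printf("machine is up with IP %s", l.IPaddr)
					return
				}
			}
			time.Sleep(2 * time.Second) // the real retry grows the delay between attempts
		}
		log.Fatal("timed out waiting for a DHCP lease")
	}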
	I1026 01:00:48.563869   27934 main.go:141] libmachine: (ha-300623-m02) Reserving static IP address...
	I1026 01:00:48.564263   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find host DHCP lease matching {name: "ha-300623-m02", mac: "52:54:00:eb:f2:95", ip: "192.168.39.62"} in network mk-ha-300623
	I1026 01:00:48.642625   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Getting to WaitForSSH function...
	I1026 01:00:48.642658   27934 main.go:141] libmachine: (ha-300623-m02) Reserved static IP address: 192.168.39.62
	I1026 01:00:48.642673   27934 main.go:141] libmachine: (ha-300623-m02) Waiting for SSH to be available...
	I1026 01:00:48.645214   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.645726   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:48.645751   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.645908   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Using SSH client type: external
	I1026 01:00:48.645957   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa (-rw-------)
	I1026 01:00:48.645990   27934 main.go:141] libmachine: (ha-300623-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 01:00:48.646004   27934 main.go:141] libmachine: (ha-300623-m02) DBG | About to run SSH command:
	I1026 01:00:48.646022   27934 main.go:141] libmachine: (ha-300623-m02) DBG | exit 0
	I1026 01:00:48.773437   27934 main.go:141] libmachine: (ha-300623-m02) DBG | SSH cmd err, output: <nil>: 
	I1026 01:00:48.773671   27934 main.go:141] libmachine: (ha-300623-m02) KVM machine creation complete!
	I1026 01:00:48.773985   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetConfigRaw
	I1026 01:00:48.774531   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:48.774718   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:48.774839   27934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 01:00:48.774863   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetState
	I1026 01:00:48.776153   27934 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 01:00:48.776168   27934 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 01:00:48.776176   27934 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 01:00:48.776184   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:48.778481   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.778857   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:48.778884   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.778991   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:48.779164   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:48.779300   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:48.779402   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:48.779538   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:48.779788   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:48.779807   27934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 01:00:48.896727   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:00:48.896751   27934 main.go:141] libmachine: Detecting the provisioner...
	I1026 01:00:48.896762   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:48.899398   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.899741   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:48.899779   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.899885   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:48.900047   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:48.900184   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:48.900289   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:48.900414   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:48.900617   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:48.900631   27934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 01:00:49.017846   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 01:00:49.017965   27934 main.go:141] libmachine: found compatible host: buildroot
	I1026 01:00:49.017981   27934 main.go:141] libmachine: Provisioning with buildroot...
	I1026 01:00:49.017993   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetMachineName
	I1026 01:00:49.018219   27934 buildroot.go:166] provisioning hostname "ha-300623-m02"
	I1026 01:00:49.018266   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetMachineName
	I1026 01:00:49.018441   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.021311   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.022133   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.022168   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.022362   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.022542   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.022691   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.022833   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.022971   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:49.023157   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:49.023181   27934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-300623-m02 && echo "ha-300623-m02" | sudo tee /etc/hostname
	I1026 01:00:49.154863   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-300623-m02
	
	I1026 01:00:49.154891   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.157409   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.157924   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.157965   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.158127   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.158313   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.158463   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.158583   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.158721   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:49.158874   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:49.158890   27934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-300623-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-300623-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-300623-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:00:49.281279   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
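Provisioning runs each of these steps as a remote command over SSH with the generated machine key. A hedged golang.org/x/crypto/ssh sketch of the hostname step follows; the host, user, and key path are taken from the log, and the library choice is an illustration rather than necessarily what libmachine uses internally.

	// ssh_hostname.go: set the guest's hostname over SSH the way the provisioner
	// does above. Illustrative sketch only.
	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		}
		client, err := ssh.Dial("tcp", "192.168.39.62:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput(`sudo hostname ha-300623-m02 && echo "ha-300623-m02" | sudo tee /etc/hostname`)
		if err != nil {
			log.Fatalf("remote command failed: %v\n%s", err, out)
		}
		fmt.Printf("hostname set: %s", out)
	}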
	I1026 01:00:49.281312   27934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:00:49.281349   27934 buildroot.go:174] setting up certificates
	I1026 01:00:49.281361   27934 provision.go:84] configureAuth start
	I1026 01:00:49.281370   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetMachineName
	I1026 01:00:49.281641   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetIP
	I1026 01:00:49.284261   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.284619   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.284660   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.284785   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.286954   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.287298   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.287326   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.287470   27934 provision.go:143] copyHostCerts
	I1026 01:00:49.287501   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:00:49.287544   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:00:49.287555   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:00:49.287640   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:00:49.287745   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:00:49.287775   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:00:49.287788   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:00:49.287835   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:00:49.287908   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:00:49.287934   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:00:49.287941   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:00:49.287990   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:00:49.288059   27934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.ha-300623-m02 san=[127.0.0.1 192.168.39.62 ha-300623-m02 localhost minikube]
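	[Editor's note — illustrative sketch, not part of the test output. The provision.go line above generates a server certificate signed by the minikube CA with the listed SANs. A minimal Go sketch of how such a cert could be produced with crypto/x509 is shown below; the function and parameter names are hypothetical placeholders, not minikube's actual provisioning code.]

```go
// Minimal sketch: sign a server certificate for a set of IP/DNS SANs with an existing CA.
package certsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert returns DER-encoded cert bytes and the matching private key.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-300623-m02"}}, // org= from the log line above
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.39.62
		DNSNames:     dnsNames, // e.g. ha-300623-m02, localhost, minikube
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	// PEM-encode der and key before writing server.pem / server-key.pem.
	return der, key, nil
}
```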
	I1026 01:00:49.407467   27934 provision.go:177] copyRemoteCerts
	I1026 01:00:49.407520   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:00:49.407552   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.410082   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.410436   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.410457   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.410696   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.410880   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.411041   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.411166   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
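	[Editor's note — illustrative sketch, not part of the test output. The sshutil/ssh_runner lines above open an SSH session to the guest and run commands such as the mkdir/scp steps that follow. A minimal Go sketch of that pattern using golang.org/x/crypto/ssh is shown below; addr, user, keyPath and cmd are hypothetical placeholders.]

```go
// Minimal sketch: run a single remote command over SSH and return its combined output.
package sshsketch

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	if err != nil {
		return "", fmt.Errorf("%q failed: %w\n%s", cmd, err, out)
	}
	return string(out), nil
}
```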
	I1026 01:00:49.495389   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:00:49.495471   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:00:49.520501   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:00:49.520571   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 01:00:49.544170   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:00:49.544266   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 01:00:49.567939   27934 provision.go:87] duration metric: took 286.565797ms to configureAuth
	I1026 01:00:49.567967   27934 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:00:49.568139   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:49.568207   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.570619   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.570975   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.571000   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.571206   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.571396   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.571565   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.571706   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.571875   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:49.572093   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:49.572115   27934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:00:49.802107   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:00:49.802136   27934 main.go:141] libmachine: Checking connection to Docker...
	I1026 01:00:49.802143   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetURL
	I1026 01:00:49.803331   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Using libvirt version 6000000
	I1026 01:00:49.805234   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.805565   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.805594   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.805716   27934 main.go:141] libmachine: Docker is up and running!
	I1026 01:00:49.805729   27934 main.go:141] libmachine: Reticulating splines...
	I1026 01:00:49.805746   27934 client.go:171] duration metric: took 25.458413075s to LocalClient.Create
	I1026 01:00:49.805769   27934 start.go:167] duration metric: took 25.45847781s to libmachine.API.Create "ha-300623"
	I1026 01:00:49.805779   27934 start.go:293] postStartSetup for "ha-300623-m02" (driver="kvm2")
	I1026 01:00:49.805791   27934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:00:49.805808   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:49.806042   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:00:49.806065   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.808068   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.808407   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.808434   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.808582   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.808773   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.808963   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.809100   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
	I1026 01:00:49.895521   27934 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:00:49.899409   27934 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:00:49.899435   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:00:49.899514   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:00:49.899627   27934 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:00:49.899639   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /etc/ssl/certs/176152.pem
	I1026 01:00:49.899762   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:00:49.908849   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:00:49.931119   27934 start.go:296] duration metric: took 125.326962ms for postStartSetup
	I1026 01:00:49.931168   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetConfigRaw
	I1026 01:00:49.931760   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetIP
	I1026 01:00:49.934318   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.934656   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.934677   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.934971   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:00:49.935199   27934 start.go:128] duration metric: took 25.605643958s to createHost
	I1026 01:00:49.935242   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.937348   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.937642   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.937668   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.937766   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.937916   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.938069   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.938232   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.938387   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:49.938577   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:49.938589   27934 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:00:50.054126   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729904450.033939767
	
	I1026 01:00:50.054149   27934 fix.go:216] guest clock: 1729904450.033939767
	I1026 01:00:50.054158   27934 fix.go:229] Guest: 2024-10-26 01:00:50.033939767 +0000 UTC Remote: 2024-10-26 01:00:49.935212743 +0000 UTC m=+68.870574304 (delta=98.727024ms)
	I1026 01:00:50.054179   27934 fix.go:200] guest clock delta is within tolerance: 98.727024ms
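	[Editor's note — illustrative sketch, not part of the test output. The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the delta if it is within tolerance. A rough Go sketch of that check is below; parseGuestClock and the tolerance handling are assumptions for illustration, not minikube's actual implementation.]

```go
// Minimal sketch: parse a guest "seconds.nanoseconds" timestamp and compare it to the host clock.
package clocksketch

import (
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns e.g. "1729904450.033939767" into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// assumes a full 9-digit nanosecond field, as date +%s.%N prints
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

// withinTolerance reports whether the absolute host/guest clock delta is acceptable.
func withinTolerance(guest time.Time, tolerance time.Duration) bool {
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}
```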
	I1026 01:00:50.054185   27934 start.go:83] releasing machines lock for "ha-300623-m02", held for 25.72474455s
	I1026 01:00:50.054206   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:50.054478   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetIP
	I1026 01:00:50.057251   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.057634   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:50.057666   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.060016   27934 out.go:177] * Found network options:
	I1026 01:00:50.061125   27934 out.go:177]   - NO_PROXY=192.168.39.183
	W1026 01:00:50.062183   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:00:50.062255   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:50.062824   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:50.062979   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:50.063068   27934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:00:50.063107   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	W1026 01:00:50.063196   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:00:50.063287   27934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:00:50.063313   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:50.065732   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.065764   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.066105   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:50.066132   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.066157   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:50.066172   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.066255   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:50.066343   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:50.066466   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:50.066529   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:50.066613   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:50.066757   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
	I1026 01:00:50.066776   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:50.066891   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
	I1026 01:00:50.300821   27934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:00:50.306327   27934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:00:50.306383   27934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:00:50.322223   27934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 01:00:50.322250   27934 start.go:495] detecting cgroup driver to use...
	I1026 01:00:50.322315   27934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:00:50.338468   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:00:50.351846   27934 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:00:50.351912   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:00:50.366331   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:00:50.380253   27934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:00:50.506965   27934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:00:50.668001   27934 docker.go:233] disabling docker service ...
	I1026 01:00:50.668069   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:00:50.682592   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:00:50.695962   27934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:00:50.824939   27934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:00:50.938022   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:00:50.952273   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:00:50.970167   27934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:00:50.970223   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:50.980486   27934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:00:50.980547   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:50.991006   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.001215   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.011378   27934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:00:51.021477   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.031248   27934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.047066   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.056669   27934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:00:51.065644   27934 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 01:00:51.065713   27934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 01:00:51.077591   27934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:00:51.086612   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:00:51.190831   27934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 01:00:51.272466   27934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:00:51.272541   27934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:00:51.277536   27934 start.go:563] Will wait 60s for crictl version
	I1026 01:00:51.277595   27934 ssh_runner.go:195] Run: which crictl
	I1026 01:00:51.281084   27934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:00:51.316243   27934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:00:51.316339   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:00:51.344007   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:00:51.373231   27934 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:00:51.374904   27934 out.go:177]   - env NO_PROXY=192.168.39.183
	I1026 01:00:51.375971   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetIP
	I1026 01:00:51.378647   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:51.378955   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:51.378984   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:51.379181   27934 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:00:51.383229   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:00:51.395396   27934 mustload.go:65] Loading cluster: ha-300623
	I1026 01:00:51.395665   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:51.395979   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:51.396021   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:51.411495   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I1026 01:00:51.412012   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:51.412465   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:51.412492   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:51.412809   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:51.413020   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:51.414616   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:00:51.414900   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:51.414943   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:51.429345   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I1026 01:00:51.429857   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:51.430394   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:51.430414   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:51.430718   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:51.430932   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:51.431063   27934 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623 for IP: 192.168.39.62
	I1026 01:00:51.431072   27934 certs.go:194] generating shared ca certs ...
	I1026 01:00:51.431085   27934 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:51.431231   27934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:00:51.431297   27934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:00:51.431310   27934 certs.go:256] generating profile certs ...
	I1026 01:00:51.431379   27934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key
	I1026 01:00:51.431404   27934 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.7eff9eab
	I1026 01:00:51.431417   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.7eff9eab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.62 192.168.39.254]
	I1026 01:00:51.551653   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.7eff9eab ...
	I1026 01:00:51.551682   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.7eff9eab: {Name:mk7f84df361678f6c264c35c7a54837d967e14ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:51.551843   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.7eff9eab ...
	I1026 01:00:51.551855   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.7eff9eab: {Name:mkd389918e7eb8b1c88d8cee260e577971075312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:51.551931   27934 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.7eff9eab -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt
	I1026 01:00:51.552066   27934 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.7eff9eab -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key
	I1026 01:00:51.552188   27934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key
	I1026 01:00:51.552202   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:00:51.552214   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:00:51.552227   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:00:51.552240   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:00:51.552251   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:00:51.552262   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:00:51.552275   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:00:51.552287   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:00:51.552335   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:00:51.552366   27934 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:00:51.552375   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:00:51.552397   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:00:51.552420   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:00:51.552441   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:00:51.552479   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:00:51.552504   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:51.552517   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem -> /usr/share/ca-certificates/17615.pem
	I1026 01:00:51.552529   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /usr/share/ca-certificates/176152.pem
	I1026 01:00:51.552559   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:51.555385   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:51.555741   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:51.555776   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:51.555946   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:51.556121   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:51.556266   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:51.556384   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:51.633868   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1026 01:00:51.638556   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1026 01:00:51.651311   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1026 01:00:51.655533   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1026 01:00:51.667970   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1026 01:00:51.671912   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1026 01:00:51.681736   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1026 01:00:51.685589   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1026 01:00:51.695314   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1026 01:00:51.699011   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1026 01:00:51.709409   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1026 01:00:51.713200   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1026 01:00:51.722473   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:00:51.745687   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:00:51.767846   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:00:51.789516   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:00:51.811259   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1026 01:00:51.833028   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 01:00:51.856110   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:00:51.879410   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:00:51.905258   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:00:51.929159   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:00:51.951850   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:00:51.976197   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1026 01:00:51.991793   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1026 01:00:52.007237   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1026 01:00:52.023097   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1026 01:00:52.038541   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1026 01:00:52.053670   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1026 01:00:52.068858   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1026 01:00:52.084534   27934 ssh_runner.go:195] Run: openssl version
	I1026 01:00:52.089743   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:00:52.099587   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:52.103529   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:52.103574   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:52.108773   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:00:52.118562   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:00:52.128439   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:00:52.132388   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:00:52.132437   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:00:52.137609   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 01:00:52.147519   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:00:52.157786   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:00:52.162186   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:00:52.162230   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:00:52.167650   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:00:52.179201   27934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:00:52.183712   27934 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 01:00:52.183765   27934 kubeadm.go:934] updating node {m02 192.168.39.62 8443 v1.31.2 crio true true} ...
	I1026 01:00:52.183873   27934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-300623-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 01:00:52.183908   27934 kube-vip.go:115] generating kube-vip config ...
	I1026 01:00:52.183953   27934 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1026 01:00:52.201496   27934 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1026 01:00:52.201565   27934 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
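	[Editor's note — illustrative sketch, not part of the test output. The kube-vip.go lines above render a static-pod manifest (written later to /etc/kubernetes/manifests/kube-vip.yaml) with the VIP, port and image filled in. A minimal Go text/template sketch of that idea is below; the template body is trimmed to the varying fields and is not minikube's actual template.]

```go
// Minimal sketch: render a cut-down kube-vip static-pod manifest from a template.
package kubevipsketch

import (
	"io"
	"text/template"
)

var manifestTmpl = template.Must(template.New("kube-vip").Parse(`apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: port
      value: "{{.Port}}"
    - name: address
      value: {{.VIP}}
  hostNetwork: true
`))

type manifestParams struct {
	Image string // e.g. ghcr.io/kube-vip/kube-vip:v0.8.4
	Port  string // e.g. "8443"
	VIP   string // e.g. 192.168.39.254
}

// renderManifest writes the filled-in manifest to w.
func renderManifest(w io.Writer, p manifestParams) error {
	return manifestTmpl.Execute(w, p)
}
```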
	I1026 01:00:52.201625   27934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:00:52.212390   27934 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1026 01:00:52.212439   27934 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1026 01:00:52.223416   27934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1026 01:00:52.223436   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1026 01:00:52.223483   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1026 01:00:52.223536   27934 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1026 01:00:52.223555   27934 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1026 01:00:52.227638   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1026 01:00:52.227662   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1026 01:00:53.105621   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1026 01:00:53.105715   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1026 01:00:53.110408   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1026 01:00:53.110445   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1026 01:00:53.233007   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:00:53.274448   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1026 01:00:53.274566   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1026 01:00:53.294441   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1026 01:00:53.294487   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1026 01:00:53.654866   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1026 01:00:53.664222   27934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1026 01:00:53.679840   27934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:00:53.695653   27934 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1026 01:00:53.711652   27934 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1026 01:00:53.715553   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:00:53.727360   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:00:53.853122   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:00:53.869765   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:00:53.870266   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:53.870326   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:53.886042   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40443
	I1026 01:00:53.886641   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:53.887219   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:53.887243   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:53.887613   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:53.887814   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:53.887974   27934 start.go:317] joinCluster: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:00:53.888094   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1026 01:00:53.888116   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:53.891569   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:53.892007   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:53.892034   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:53.892213   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:53.892359   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:53.892504   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:53.892700   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:54.059992   27934 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:00:54.060032   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l7xlpj.5mal73j6josvpzmx --discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m02 --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443"
	I1026 01:01:15.752497   27934 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l7xlpj.5mal73j6josvpzmx --discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m02 --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443": (21.692442996s)
	I1026 01:01:15.752534   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1026 01:01:16.303360   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-300623-m02 minikube.k8s.io/updated_at=2024_10_26T01_01_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=ha-300623 minikube.k8s.io/primary=false
	I1026 01:01:16.453258   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-300623-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1026 01:01:16.592863   27934 start.go:319] duration metric: took 22.704885851s to joinCluster
	I1026 01:01:16.592954   27934 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:01:16.593288   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:01:16.594650   27934 out.go:177] * Verifying Kubernetes components...
	I1026 01:01:16.596091   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:01:16.850259   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:01:16.885786   27934 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:01:16.886030   27934 kapi.go:59] client config for ha-300623: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt", KeyFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key", CAFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 01:01:16.886096   27934 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.183:8443
	I1026 01:01:16.886309   27934 node_ready.go:35] waiting up to 6m0s for node "ha-300623-m02" to be "Ready" ...
	I1026 01:01:16.886394   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:16.886406   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:16.886416   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:16.886421   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:16.901951   27934 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1026 01:01:17.386830   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:17.386852   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:17.386859   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:17.386867   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:17.391117   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:17.886726   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:17.886752   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:17.886769   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:17.886774   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:17.891812   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:01:18.386816   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:18.386836   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:18.386844   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:18.386849   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:18.389277   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:18.887322   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:18.887345   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:18.887354   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:18.887359   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:18.890950   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:18.891497   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:19.386717   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:19.386741   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:19.386752   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:19.386757   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:19.389841   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:19.886538   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:19.886562   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:19.886569   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:19.886573   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:19.889883   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:20.386728   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:20.386753   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:20.386764   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:20.386770   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:20.392483   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:01:20.887438   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:20.887464   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:20.887474   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:20.887480   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:20.891169   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:20.891590   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:21.386734   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:21.386758   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:21.386770   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:21.386778   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:21.389970   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:21.886824   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:21.886849   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:21.886859   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:21.886865   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:21.891560   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:22.386652   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:22.386674   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:22.386682   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:22.386686   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:22.391520   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:22.887482   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:22.887508   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:22.887524   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:22.887529   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:22.891155   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:22.891643   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:23.387538   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:23.387567   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:23.387578   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:23.387585   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:23.390499   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:23.886601   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:23.886627   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:23.886637   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:23.886647   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:23.890054   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:24.387524   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:24.387553   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:24.387564   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:24.387570   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:24.390618   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:24.886521   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:24.886550   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:24.886561   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:24.886567   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:24.889985   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:25.386794   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:25.386822   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:25.386831   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:25.386838   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:25.390108   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:25.390691   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:25.887094   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:25.887116   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:25.887124   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:25.887128   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:25.890067   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:26.387517   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:26.387537   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:26.387545   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:26.387550   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:26.391065   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:26.886664   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:26.886688   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:26.886698   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:26.886703   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:26.889958   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:27.386821   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:27.386850   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:27.386860   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:27.386865   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:27.389901   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:27.886863   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:27.886892   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:27.886901   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:27.886904   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:27.890223   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:27.890712   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:28.387256   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:28.387286   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:28.387297   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:28.387304   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:28.391313   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:28.887398   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:28.887423   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:28.887431   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:28.887435   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:28.891415   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:29.387299   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:29.387320   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:29.387328   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:29.387333   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:29.394125   27934 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1026 01:01:29.886896   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:29.886918   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:29.886926   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:29.886928   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:29.890460   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:29.891101   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:30.386473   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:30.386494   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:30.386505   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:30.386512   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:30.389574   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:30.886604   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:30.886631   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:30.886640   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:30.886644   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:30.890190   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:31.386924   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:31.386949   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:31.386959   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:31.386966   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:31.399297   27934 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1026 01:01:31.887213   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:31.887236   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:31.887243   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:31.887250   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:31.890605   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:31.891200   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:32.386487   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:32.386513   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:32.386523   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:32.386530   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:32.389962   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:32.886975   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:32.887003   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:32.887016   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:32.887021   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:32.890088   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.386916   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:33.386938   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.386946   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.386950   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.390776   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.886708   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:33.886731   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.886742   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.886747   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.890420   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.890962   27934 node_ready.go:49] node "ha-300623-m02" has status "Ready":"True"
	I1026 01:01:33.890985   27934 node_ready.go:38] duration metric: took 17.004659759s for node "ha-300623-m02" to be "Ready" ...
	I1026 01:01:33.890996   27934 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:01:33.891090   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:33.891103   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.891113   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.891118   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.895593   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:33.901510   27934 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.901584   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ntmgc
	I1026 01:01:33.901593   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.901599   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.901603   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.904838   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.905632   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:33.905646   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.905653   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.905662   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.908670   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.909108   27934 pod_ready.go:93] pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:33.909125   27934 pod_ready.go:82] duration metric: took 7.593244ms for pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.909134   27934 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.909228   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qx24f
	I1026 01:01:33.909236   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.909243   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.909246   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.911623   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.912324   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:33.912342   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.912351   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.912356   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.914836   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.915526   27934 pod_ready.go:93] pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:33.915582   27934 pod_ready.go:82] duration metric: took 6.44095ms for pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.915619   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.915708   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623
	I1026 01:01:33.915720   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.915730   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.915737   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.918774   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.919308   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:33.919323   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.919332   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.919337   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.921541   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.921916   27934 pod_ready.go:93] pod "etcd-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:33.921932   27934 pod_ready.go:82] duration metric: took 6.293574ms for pod "etcd-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.921944   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.921993   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623-m02
	I1026 01:01:33.922003   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.922013   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.922020   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.924042   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.924574   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:33.924592   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.924620   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.924630   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.926627   27934 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:01:33.927009   27934 pod_ready.go:93] pod "etcd-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:33.927026   27934 pod_ready.go:82] duration metric: took 5.07473ms for pod "etcd-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.927043   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.087429   27934 request.go:632] Waited for 160.309698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623
	I1026 01:01:34.087488   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623
	I1026 01:01:34.087496   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.087507   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.087517   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.093218   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:01:34.287260   27934 request.go:632] Waited for 193.380175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:34.287335   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:34.287346   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.287356   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.287367   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.290680   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:34.291257   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:34.291280   27934 pod_ready.go:82] duration metric: took 364.229033ms for pod "kube-apiserver-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.291293   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.487643   27934 request.go:632] Waited for 196.274187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m02
	I1026 01:01:34.487743   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m02
	I1026 01:01:34.487757   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.487769   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.487776   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.490314   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:34.687266   27934 request.go:632] Waited for 196.34951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:34.687319   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:34.687325   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.687332   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.687336   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.690681   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:34.691098   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:34.691116   27934 pod_ready.go:82] duration metric: took 399.816191ms for pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.691125   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.887235   27934 request.go:632] Waited for 196.048043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623
	I1026 01:01:34.887286   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623
	I1026 01:01:34.887292   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.887299   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.887304   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.890298   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:35.087251   27934 request.go:632] Waited for 196.393455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:35.087304   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:35.087311   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.087320   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.087327   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.096042   27934 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1026 01:01:35.096481   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:35.096497   27934 pod_ready.go:82] duration metric: took 405.365113ms for pod "kube-controller-manager-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.096507   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.287575   27934 request.go:632] Waited for 190.95439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m02
	I1026 01:01:35.287635   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m02
	I1026 01:01:35.287641   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.287656   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.287664   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.290956   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:35.486850   27934 request.go:632] Waited for 195.295178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:35.486901   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:35.486907   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.486914   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.486918   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.489791   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:35.490490   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:35.490509   27934 pod_ready.go:82] duration metric: took 393.992807ms for pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.490519   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-65rns" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.687677   27934 request.go:632] Waited for 197.085878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65rns
	I1026 01:01:35.687734   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65rns
	I1026 01:01:35.687739   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.687747   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.687751   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.690861   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:35.886824   27934 request.go:632] Waited for 195.303807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:35.886902   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:35.886908   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.886915   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.886919   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.890003   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:35.890588   27934 pod_ready.go:93] pod "kube-proxy-65rns" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:35.890610   27934 pod_ready.go:82] duration metric: took 400.083533ms for pod "kube-proxy-65rns" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.890620   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7hn2d" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.087724   27934 request.go:632] Waited for 197.035019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hn2d
	I1026 01:01:36.087799   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hn2d
	I1026 01:01:36.087807   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.087817   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.087823   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.090987   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:36.287060   27934 request.go:632] Waited for 195.34906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:36.287112   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:36.287118   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.287126   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.287130   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.290355   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:36.290978   27934 pod_ready.go:93] pod "kube-proxy-7hn2d" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:36.291000   27934 pod_ready.go:82] duration metric: took 400.372479ms for pod "kube-proxy-7hn2d" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.291014   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.486971   27934 request.go:632] Waited for 195.883358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623
	I1026 01:01:36.487050   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623
	I1026 01:01:36.487059   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.487068   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.487073   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.491124   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:36.686937   27934 request.go:632] Waited for 195.292838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:36.686992   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:36.686998   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.687005   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.687009   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.689912   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:36.690462   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:36.690479   27934 pod_ready.go:82] duration metric: took 399.458178ms for pod "kube-scheduler-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.690490   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.887645   27934 request.go:632] Waited for 197.093805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m02
	I1026 01:01:36.887721   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m02
	I1026 01:01:36.887731   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.887742   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.887752   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.892972   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:01:37.086834   27934 request.go:632] Waited for 193.310036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:37.086917   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:37.086924   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.086935   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.086940   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.091462   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:37.091914   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:37.091933   27934 pod_ready.go:82] duration metric: took 401.437262ms for pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:37.091944   27934 pod_ready.go:39] duration metric: took 3.20092896s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:01:37.091963   27934 api_server.go:52] waiting for apiserver process to appear ...
	I1026 01:01:37.092013   27934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:01:37.107184   27934 api_server.go:72] duration metric: took 20.514182215s to wait for apiserver process to appear ...
	I1026 01:01:37.107232   27934 api_server.go:88] waiting for apiserver healthz status ...
	I1026 01:01:37.107251   27934 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1026 01:01:37.112416   27934 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1026 01:01:37.112504   27934 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I1026 01:01:37.112517   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.112528   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.112539   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.113540   27934 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1026 01:01:37.113668   27934 api_server.go:141] control plane version: v1.31.2
	I1026 01:01:37.113698   27934 api_server.go:131] duration metric: took 6.458284ms to wait for apiserver health ...
	I1026 01:01:37.113710   27934 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 01:01:37.287117   27934 request.go:632] Waited for 173.325695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:37.287206   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:37.287218   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.287229   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.287237   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.291660   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:37.296191   27934 system_pods.go:59] 17 kube-system pods found
	I1026 01:01:37.296219   27934 system_pods.go:61] "coredns-7c65d6cfc9-ntmgc" [b2e07a8a-ed53-4151-9cdd-6345d84fea7d] Running
	I1026 01:01:37.296224   27934 system_pods.go:61] "coredns-7c65d6cfc9-qx24f" [d7fc0eb5-4828-436f-a5c8-8de607f590cf] Running
	I1026 01:01:37.296228   27934 system_pods.go:61] "etcd-ha-300623" [7af25c40-90db-43fb-9d1c-02d3b6092d30] Running
	I1026 01:01:37.296232   27934 system_pods.go:61] "etcd-ha-300623-m02" [5e6978a1-41aa-46dd-a1cd-e02042d4ce04] Running
	I1026 01:01:37.296235   27934 system_pods.go:61] "kindnet-4cqmf" [c887471a-629c-4bf1-9296-8ccb5ba56cd6] Running
	I1026 01:01:37.296238   27934 system_pods.go:61] "kindnet-g5bkb" [0ad4551d-8c28-45b3-9563-03d427208f4f] Running
	I1026 01:01:37.296241   27934 system_pods.go:61] "kube-apiserver-ha-300623" [23f40207-db77-4a02-a2dc-eecea5b1874a] Running
	I1026 01:01:37.296244   27934 system_pods.go:61] "kube-apiserver-ha-300623-m02" [6e2d1aeb-ad12-4328-b4da-6b3a2fd19df0] Running
	I1026 01:01:37.296248   27934 system_pods.go:61] "kube-controller-manager-ha-300623" [b9c979d4-64e6-473c-b688-295ddd98c379] Running
	I1026 01:01:37.296251   27934 system_pods.go:61] "kube-controller-manager-ha-300623-m02" [4ae0dbcd-d50c-4a53-9347-bed0a06f1f15] Running
	I1026 01:01:37.296254   27934 system_pods.go:61] "kube-proxy-65rns" [895d0bd9-0f38-442f-99a2-6c5c70bddd39] Running
	I1026 01:01:37.296257   27934 system_pods.go:61] "kube-proxy-7hn2d" [8ffc007b-7e17-4810-9f44-f190a8a7d21b] Running
	I1026 01:01:37.296260   27934 system_pods.go:61] "kube-scheduler-ha-300623" [fcbddffd-40d8-4ebd-bf1e-58b1457af487] Running
	I1026 01:01:37.296263   27934 system_pods.go:61] "kube-scheduler-ha-300623-m02" [81664577-53a3-46fd-98f0-5a517d60fc40] Running
	I1026 01:01:37.296266   27934 system_pods.go:61] "kube-vip-ha-300623" [23c24ab4-cff5-48fa-841b-9567360cbb00] Running
	I1026 01:01:37.296269   27934 system_pods.go:61] "kube-vip-ha-300623-m02" [5e054e06-be47-4fca-bf3d-d0919d31fe23] Running
	I1026 01:01:37.296272   27934 system_pods.go:61] "storage-provisioner" [28d286b1-45b3-4775-a8ff-47dc3cb84792] Running
	I1026 01:01:37.296277   27934 system_pods.go:74] duration metric: took 182.559653ms to wait for pod list to return data ...
	I1026 01:01:37.296287   27934 default_sa.go:34] waiting for default service account to be created ...
	I1026 01:01:37.487718   27934 request.go:632] Waited for 191.356548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:01:37.487771   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:01:37.487776   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.487783   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.487787   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.491586   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:37.491857   27934 default_sa.go:45] found service account: "default"
	I1026 01:01:37.491878   27934 default_sa.go:55] duration metric: took 195.585476ms for default service account to be created ...
	I1026 01:01:37.491887   27934 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 01:01:37.687316   27934 request.go:632] Waited for 195.344627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:37.687371   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:37.687376   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.687383   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.687387   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.691369   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:37.696949   27934 system_pods.go:86] 17 kube-system pods found
	I1026 01:01:37.696973   27934 system_pods.go:89] "coredns-7c65d6cfc9-ntmgc" [b2e07a8a-ed53-4151-9cdd-6345d84fea7d] Running
	I1026 01:01:37.696979   27934 system_pods.go:89] "coredns-7c65d6cfc9-qx24f" [d7fc0eb5-4828-436f-a5c8-8de607f590cf] Running
	I1026 01:01:37.696983   27934 system_pods.go:89] "etcd-ha-300623" [7af25c40-90db-43fb-9d1c-02d3b6092d30] Running
	I1026 01:01:37.696988   27934 system_pods.go:89] "etcd-ha-300623-m02" [5e6978a1-41aa-46dd-a1cd-e02042d4ce04] Running
	I1026 01:01:37.696991   27934 system_pods.go:89] "kindnet-4cqmf" [c887471a-629c-4bf1-9296-8ccb5ba56cd6] Running
	I1026 01:01:37.696995   27934 system_pods.go:89] "kindnet-g5bkb" [0ad4551d-8c28-45b3-9563-03d427208f4f] Running
	I1026 01:01:37.696999   27934 system_pods.go:89] "kube-apiserver-ha-300623" [23f40207-db77-4a02-a2dc-eecea5b1874a] Running
	I1026 01:01:37.697003   27934 system_pods.go:89] "kube-apiserver-ha-300623-m02" [6e2d1aeb-ad12-4328-b4da-6b3a2fd19df0] Running
	I1026 01:01:37.697006   27934 system_pods.go:89] "kube-controller-manager-ha-300623" [b9c979d4-64e6-473c-b688-295ddd98c379] Running
	I1026 01:01:37.697010   27934 system_pods.go:89] "kube-controller-manager-ha-300623-m02" [4ae0dbcd-d50c-4a53-9347-bed0a06f1f15] Running
	I1026 01:01:37.697014   27934 system_pods.go:89] "kube-proxy-65rns" [895d0bd9-0f38-442f-99a2-6c5c70bddd39] Running
	I1026 01:01:37.697018   27934 system_pods.go:89] "kube-proxy-7hn2d" [8ffc007b-7e17-4810-9f44-f190a8a7d21b] Running
	I1026 01:01:37.697021   27934 system_pods.go:89] "kube-scheduler-ha-300623" [fcbddffd-40d8-4ebd-bf1e-58b1457af487] Running
	I1026 01:01:37.697028   27934 system_pods.go:89] "kube-scheduler-ha-300623-m02" [81664577-53a3-46fd-98f0-5a517d60fc40] Running
	I1026 01:01:37.697031   27934 system_pods.go:89] "kube-vip-ha-300623" [23c24ab4-cff5-48fa-841b-9567360cbb00] Running
	I1026 01:01:37.697034   27934 system_pods.go:89] "kube-vip-ha-300623-m02" [5e054e06-be47-4fca-bf3d-d0919d31fe23] Running
	I1026 01:01:37.697036   27934 system_pods.go:89] "storage-provisioner" [28d286b1-45b3-4775-a8ff-47dc3cb84792] Running
	I1026 01:01:37.697042   27934 system_pods.go:126] duration metric: took 205.150542ms to wait for k8s-apps to be running ...
	I1026 01:01:37.697052   27934 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 01:01:37.697091   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:01:37.712370   27934 system_svc.go:56] duration metric: took 15.306195ms WaitForService to wait for kubelet
	I1026 01:01:37.712402   27934 kubeadm.go:582] duration metric: took 21.119406025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 01:01:37.712420   27934 node_conditions.go:102] verifying NodePressure condition ...
	I1026 01:01:37.886735   27934 request.go:632] Waited for 174.248578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I1026 01:01:37.886856   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I1026 01:01:37.886868   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.886878   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.886887   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.890795   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:37.891473   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:01:37.891497   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:01:37.891509   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:01:37.891513   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:01:37.891517   27934 node_conditions.go:105] duration metric: took 179.092926ms to run NodePressure ...
	I1026 01:01:37.891528   27934 start.go:241] waiting for startup goroutines ...
	I1026 01:01:37.891553   27934 start.go:255] writing updated cluster config ...
	I1026 01:01:37.893974   27934 out.go:201] 
	I1026 01:01:37.895579   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:01:37.895693   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:01:37.897785   27934 out.go:177] * Starting "ha-300623-m03" control-plane node in "ha-300623" cluster
	I1026 01:01:37.898981   27934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:01:37.899006   27934 cache.go:56] Caching tarball of preloaded images
	I1026 01:01:37.899114   27934 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:01:37.899125   27934 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 01:01:37.899210   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:01:37.900601   27934 start.go:360] acquireMachinesLock for ha-300623-m03: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 01:01:37.900662   27934 start.go:364] duration metric: took 37.924µs to acquireMachinesLock for "ha-300623-m03"
	I1026 01:01:37.900681   27934 start.go:93] Provisioning new machine with config: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:01:37.900777   27934 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1026 01:01:37.902482   27934 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1026 01:01:37.902577   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:01:37.902616   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:01:37.917489   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I1026 01:01:37.918010   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:01:37.918524   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:01:37.918546   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:01:37.918854   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:01:37.919023   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetMachineName
	I1026 01:01:37.919164   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:01:37.919300   27934 start.go:159] libmachine.API.Create for "ha-300623" (driver="kvm2")
	I1026 01:01:37.919332   27934 client.go:168] LocalClient.Create starting
	I1026 01:01:37.919365   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 01:01:37.919401   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 01:01:37.919415   27934 main.go:141] libmachine: Parsing certificate...
	I1026 01:01:37.919461   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 01:01:37.919481   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 01:01:37.919492   27934 main.go:141] libmachine: Parsing certificate...
	I1026 01:01:37.919511   27934 main.go:141] libmachine: Running pre-create checks...
	I1026 01:01:37.919519   27934 main.go:141] libmachine: (ha-300623-m03) Calling .PreCreateCheck
	I1026 01:01:37.919665   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetConfigRaw
	I1026 01:01:37.920059   27934 main.go:141] libmachine: Creating machine...
	I1026 01:01:37.920075   27934 main.go:141] libmachine: (ha-300623-m03) Calling .Create
	I1026 01:01:37.920211   27934 main.go:141] libmachine: (ha-300623-m03) Creating KVM machine...
	I1026 01:01:37.921465   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found existing default KVM network
	I1026 01:01:37.921611   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found existing private KVM network mk-ha-300623
	I1026 01:01:37.921761   27934 main.go:141] libmachine: (ha-300623-m03) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03 ...
	I1026 01:01:37.921786   27934 main.go:141] libmachine: (ha-300623-m03) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 01:01:37.921849   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:37.921742   28699 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:01:37.921948   27934 main.go:141] libmachine: (ha-300623-m03) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 01:01:38.168295   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:38.168154   28699 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa...
	I1026 01:01:38.291085   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:38.290967   28699 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/ha-300623-m03.rawdisk...
	I1026 01:01:38.291115   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Writing magic tar header
	I1026 01:01:38.291125   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Writing SSH key tar header
	I1026 01:01:38.291132   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:38.291098   28699 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03 ...
	I1026 01:01:38.291249   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03
	I1026 01:01:38.291280   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03 (perms=drwx------)
	I1026 01:01:38.291294   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 01:01:38.291307   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:01:38.291313   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 01:01:38.291323   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 01:01:38.291330   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins
	I1026 01:01:38.291340   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home
	I1026 01:01:38.291363   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 01:01:38.291374   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Skipping /home - not owner
	I1026 01:01:38.291387   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 01:01:38.291395   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 01:01:38.291403   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 01:01:38.291411   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 01:01:38.291417   27934 main.go:141] libmachine: (ha-300623-m03) Creating domain...
	I1026 01:01:38.292244   27934 main.go:141] libmachine: (ha-300623-m03) define libvirt domain using xml: 
	I1026 01:01:38.292268   27934 main.go:141] libmachine: (ha-300623-m03) <domain type='kvm'>
	I1026 01:01:38.292276   27934 main.go:141] libmachine: (ha-300623-m03)   <name>ha-300623-m03</name>
	I1026 01:01:38.292281   27934 main.go:141] libmachine: (ha-300623-m03)   <memory unit='MiB'>2200</memory>
	I1026 01:01:38.292286   27934 main.go:141] libmachine: (ha-300623-m03)   <vcpu>2</vcpu>
	I1026 01:01:38.292290   27934 main.go:141] libmachine: (ha-300623-m03)   <features>
	I1026 01:01:38.292296   27934 main.go:141] libmachine: (ha-300623-m03)     <acpi/>
	I1026 01:01:38.292303   27934 main.go:141] libmachine: (ha-300623-m03)     <apic/>
	I1026 01:01:38.292314   27934 main.go:141] libmachine: (ha-300623-m03)     <pae/>
	I1026 01:01:38.292320   27934 main.go:141] libmachine: (ha-300623-m03)     
	I1026 01:01:38.292330   27934 main.go:141] libmachine: (ha-300623-m03)   </features>
	I1026 01:01:38.292336   27934 main.go:141] libmachine: (ha-300623-m03)   <cpu mode='host-passthrough'>
	I1026 01:01:38.292375   27934 main.go:141] libmachine: (ha-300623-m03)   
	I1026 01:01:38.292393   27934 main.go:141] libmachine: (ha-300623-m03)   </cpu>
	I1026 01:01:38.292406   27934 main.go:141] libmachine: (ha-300623-m03)   <os>
	I1026 01:01:38.292421   27934 main.go:141] libmachine: (ha-300623-m03)     <type>hvm</type>
	I1026 01:01:38.292439   27934 main.go:141] libmachine: (ha-300623-m03)     <boot dev='cdrom'/>
	I1026 01:01:38.292484   27934 main.go:141] libmachine: (ha-300623-m03)     <boot dev='hd'/>
	I1026 01:01:38.292496   27934 main.go:141] libmachine: (ha-300623-m03)     <bootmenu enable='no'/>
	I1026 01:01:38.292505   27934 main.go:141] libmachine: (ha-300623-m03)   </os>
	I1026 01:01:38.292533   27934 main.go:141] libmachine: (ha-300623-m03)   <devices>
	I1026 01:01:38.292552   27934 main.go:141] libmachine: (ha-300623-m03)     <disk type='file' device='cdrom'>
	I1026 01:01:38.292569   27934 main.go:141] libmachine: (ha-300623-m03)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/boot2docker.iso'/>
	I1026 01:01:38.292579   27934 main.go:141] libmachine: (ha-300623-m03)       <target dev='hdc' bus='scsi'/>
	I1026 01:01:38.292598   27934 main.go:141] libmachine: (ha-300623-m03)       <readonly/>
	I1026 01:01:38.292607   27934 main.go:141] libmachine: (ha-300623-m03)     </disk>
	I1026 01:01:38.292617   27934 main.go:141] libmachine: (ha-300623-m03)     <disk type='file' device='disk'>
	I1026 01:01:38.292641   27934 main.go:141] libmachine: (ha-300623-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 01:01:38.292657   27934 main.go:141] libmachine: (ha-300623-m03)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/ha-300623-m03.rawdisk'/>
	I1026 01:01:38.292667   27934 main.go:141] libmachine: (ha-300623-m03)       <target dev='hda' bus='virtio'/>
	I1026 01:01:38.292685   27934 main.go:141] libmachine: (ha-300623-m03)     </disk>
	I1026 01:01:38.292699   27934 main.go:141] libmachine: (ha-300623-m03)     <interface type='network'>
	I1026 01:01:38.292713   27934 main.go:141] libmachine: (ha-300623-m03)       <source network='mk-ha-300623'/>
	I1026 01:01:38.292722   27934 main.go:141] libmachine: (ha-300623-m03)       <model type='virtio'/>
	I1026 01:01:38.292731   27934 main.go:141] libmachine: (ha-300623-m03)     </interface>
	I1026 01:01:38.292740   27934 main.go:141] libmachine: (ha-300623-m03)     <interface type='network'>
	I1026 01:01:38.292749   27934 main.go:141] libmachine: (ha-300623-m03)       <source network='default'/>
	I1026 01:01:38.292759   27934 main.go:141] libmachine: (ha-300623-m03)       <model type='virtio'/>
	I1026 01:01:38.292790   27934 main.go:141] libmachine: (ha-300623-m03)     </interface>
	I1026 01:01:38.292812   27934 main.go:141] libmachine: (ha-300623-m03)     <serial type='pty'>
	I1026 01:01:38.292821   27934 main.go:141] libmachine: (ha-300623-m03)       <target port='0'/>
	I1026 01:01:38.292825   27934 main.go:141] libmachine: (ha-300623-m03)     </serial>
	I1026 01:01:38.292832   27934 main.go:141] libmachine: (ha-300623-m03)     <console type='pty'>
	I1026 01:01:38.292837   27934 main.go:141] libmachine: (ha-300623-m03)       <target type='serial' port='0'/>
	I1026 01:01:38.292843   27934 main.go:141] libmachine: (ha-300623-m03)     </console>
	I1026 01:01:38.292851   27934 main.go:141] libmachine: (ha-300623-m03)     <rng model='virtio'>
	I1026 01:01:38.292862   27934 main.go:141] libmachine: (ha-300623-m03)       <backend model='random'>/dev/random</backend>
	I1026 01:01:38.292871   27934 main.go:141] libmachine: (ha-300623-m03)     </rng>
	I1026 01:01:38.292879   27934 main.go:141] libmachine: (ha-300623-m03)     
	I1026 01:01:38.292887   27934 main.go:141] libmachine: (ha-300623-m03)     
	I1026 01:01:38.292907   27934 main.go:141] libmachine: (ha-300623-m03)   </devices>
	I1026 01:01:38.292927   27934 main.go:141] libmachine: (ha-300623-m03) </domain>
	I1026 01:01:38.292944   27934 main.go:141] libmachine: (ha-300623-m03) 
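
The <domain> XML printed above is what the kvm2 driver hands to libvirt when it creates the m03 VM. As a rough illustration only (this is not minikube's actual driver code), defining and booting a domain from such an XML file with the libvirt Go bindings could look like the sketch below; the package libvirt.org/go/libvirt, the qemu:///system URI and the XML file name are assumptions.

package main

import (
	"fmt"
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Read a domain definition like the one logged above (hypothetical path).
	xml, err := os.ReadFile("ha-300623-m03.xml")
	if err != nil {
		log.Fatal(err)
	}

	// Connect to the local system libvirtd, as the kvm2 driver does.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent domain from the XML, then boot it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	name, _ := dom.GetName()
	fmt.Printf("domain %s defined and started\n", name)
}
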
	I1026 01:01:38.300030   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:59:6f:46 in network default
	I1026 01:01:38.300611   27934 main.go:141] libmachine: (ha-300623-m03) Ensuring networks are active...
	I1026 01:01:38.300639   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:38.301325   27934 main.go:141] libmachine: (ha-300623-m03) Ensuring network default is active
	I1026 01:01:38.301614   27934 main.go:141] libmachine: (ha-300623-m03) Ensuring network mk-ha-300623 is active
	I1026 01:01:38.301965   27934 main.go:141] libmachine: (ha-300623-m03) Getting domain xml...
	I1026 01:01:38.302564   27934 main.go:141] libmachine: (ha-300623-m03) Creating domain...
	I1026 01:01:39.541523   27934 main.go:141] libmachine: (ha-300623-m03) Waiting to get IP...
	I1026 01:01:39.542453   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:39.542916   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:39.542942   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:39.542887   28699 retry.go:31] will retry after 281.419322ms: waiting for machine to come up
	I1026 01:01:39.826321   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:39.826750   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:39.826778   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:39.826737   28699 retry.go:31] will retry after 326.383367ms: waiting for machine to come up
	I1026 01:01:40.155076   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:40.155490   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:40.155515   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:40.155448   28699 retry.go:31] will retry after 321.43703ms: waiting for machine to come up
	I1026 01:01:40.479066   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:40.479512   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:40.479541   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:40.479464   28699 retry.go:31] will retry after 585.906236ms: waiting for machine to come up
	I1026 01:01:41.068220   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:41.068712   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:41.068740   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:41.068671   28699 retry.go:31] will retry after 528.538636ms: waiting for machine to come up
	I1026 01:01:41.598430   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:41.599018   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:41.599040   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:41.598979   28699 retry.go:31] will retry after 646.897359ms: waiting for machine to come up
	I1026 01:01:42.247537   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:42.247952   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:42.247977   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:42.247889   28699 retry.go:31] will retry after 982.424553ms: waiting for machine to come up
	I1026 01:01:43.231997   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:43.232498   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:43.232526   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:43.232426   28699 retry.go:31] will retry after 920.160573ms: waiting for machine to come up
	I1026 01:01:44.154517   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:44.155015   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:44.155041   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:44.154974   28699 retry.go:31] will retry after 1.233732499s: waiting for machine to come up
	I1026 01:01:45.390175   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:45.390649   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:45.390676   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:45.390595   28699 retry.go:31] will retry after 2.305424014s: waiting for machine to come up
	I1026 01:01:47.698485   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:47.698913   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:47.698936   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:47.698861   28699 retry.go:31] will retry after 2.109217289s: waiting for machine to come up
	I1026 01:01:49.810556   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:49.811065   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:49.811095   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:49.811021   28699 retry.go:31] will retry after 3.235213993s: waiting for machine to come up
	I1026 01:01:53.047405   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:53.047859   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:53.047896   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:53.047798   28699 retry.go:31] will retry after 2.928776248s: waiting for machine to come up
	I1026 01:01:55.979004   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:55.979474   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:55.979500   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:55.979422   28699 retry.go:31] will retry after 4.662153221s: waiting for machine to come up
	I1026 01:02:00.643538   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.644004   27934 main.go:141] libmachine: (ha-300623-m03) Found IP for machine: 192.168.39.180
	I1026 01:02:00.644032   27934 main.go:141] libmachine: (ha-300623-m03) Reserving static IP address...
	I1026 01:02:00.644046   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has current primary IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.644407   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find host DHCP lease matching {name: "ha-300623-m03", mac: "52:54:00:c1:38:db", ip: "192.168.39.180"} in network mk-ha-300623
	I1026 01:02:00.720512   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Getting to WaitForSSH function...
	I1026 01:02:00.720543   27934 main.go:141] libmachine: (ha-300623-m03) Reserved static IP address: 192.168.39.180
	I1026 01:02:00.720555   27934 main.go:141] libmachine: (ha-300623-m03) Waiting for SSH to be available...
	I1026 01:02:00.723096   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.723544   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:00.723574   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.723782   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Using SSH client type: external
	I1026 01:02:00.723802   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa (-rw-------)
	I1026 01:02:00.723832   27934 main.go:141] libmachine: (ha-300623-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 01:02:00.723848   27934 main.go:141] libmachine: (ha-300623-m03) DBG | About to run SSH command:
	I1026 01:02:00.723870   27934 main.go:141] libmachine: (ha-300623-m03) DBG | exit 0
	I1026 01:02:00.849883   27934 main.go:141] libmachine: (ha-300623-m03) DBG | SSH cmd err, output: <nil>: 
	I1026 01:02:00.850375   27934 main.go:141] libmachine: (ha-300623-m03) KVM machine creation complete!
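
The repeated "will retry after ..." lines above come from minikube's generic retry helper while it polls libvirt for a DHCP lease on the new domain. A minimal sketch of that wait-for-IP loop with a randomized, growing backoff is shown below; waitForIP and its lookup callback are hypothetical names for illustration, not minikube's real API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a randomized, growing backoff between attempts, the same shape as
// the "will retry after ..." lines in the log above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// Jitter the delay and let it grow, capped at a few seconds.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Hypothetical lookup that succeeds after a few attempts.
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.180", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
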
	I1026 01:02:00.850699   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetConfigRaw
	I1026 01:02:00.851242   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:00.851412   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:00.851548   27934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 01:02:00.851566   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetState
	I1026 01:02:00.852882   27934 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 01:02:00.852898   27934 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 01:02:00.852910   27934 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 01:02:00.852920   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:00.855365   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.855806   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:00.855828   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.856011   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:00.856209   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:00.856384   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:00.856518   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:00.856737   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:00.856963   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:00.856977   27934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 01:02:00.960586   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:02:00.960610   27934 main.go:141] libmachine: Detecting the provisioner...
	I1026 01:02:00.960620   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:00.963489   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.963835   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:00.963855   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.964027   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:00.964212   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:00.964377   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:00.964520   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:00.964689   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:00.964839   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:00.964850   27934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 01:02:01.070154   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 01:02:01.070243   27934 main.go:141] libmachine: found compatible host: buildroot
	I1026 01:02:01.070253   27934 main.go:141] libmachine: Provisioning with buildroot...
	I1026 01:02:01.070260   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetMachineName
	I1026 01:02:01.070494   27934 buildroot.go:166] provisioning hostname "ha-300623-m03"
	I1026 01:02:01.070509   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetMachineName
	I1026 01:02:01.070670   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.073236   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.073643   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.073674   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.073803   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.074025   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.074141   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.074309   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.074462   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:01.074668   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:01.074685   27934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-300623-m03 && echo "ha-300623-m03" | sudo tee /etc/hostname
	I1026 01:02:01.191755   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-300623-m03
	
	I1026 01:02:01.191785   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.194565   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.194928   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.194957   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.195106   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.195276   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.195444   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.195582   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.195873   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:01.196084   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:01.196105   27934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-300623-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-300623-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-300623-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:02:01.305994   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:02:01.306027   27934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:02:01.306044   27934 buildroot.go:174] setting up certificates
	I1026 01:02:01.306053   27934 provision.go:84] configureAuth start
	I1026 01:02:01.306066   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetMachineName
	I1026 01:02:01.306391   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetIP
	I1026 01:02:01.308943   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.309271   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.309299   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.309440   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.311607   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.311976   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.312003   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.312212   27934 provision.go:143] copyHostCerts
	I1026 01:02:01.312245   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:02:01.312277   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:02:01.312286   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:02:01.312350   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:02:01.312423   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:02:01.312441   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:02:01.312445   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:02:01.312471   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:02:01.312516   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:02:01.312533   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:02:01.312540   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:02:01.312560   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:02:01.312651   27934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.ha-300623-m03 san=[127.0.0.1 192.168.39.180 ha-300623-m03 localhost minikube]
	I1026 01:02:01.465531   27934 provision.go:177] copyRemoteCerts
	I1026 01:02:01.465583   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:02:01.465608   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.468185   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.468506   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.468531   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.468753   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.468983   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.469158   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.469293   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:02:01.551550   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:02:01.551614   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:02:01.576554   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:02:01.576635   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 01:02:01.602350   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:02:01.602435   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 01:02:01.626219   27934 provision.go:87] duration metric: took 320.153705ms to configureAuth
	I1026 01:02:01.626250   27934 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:02:01.626469   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:02:01.626540   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.629202   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.629541   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.629569   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.629826   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.630038   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.630193   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.630349   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.630520   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:01.630681   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:01.630695   27934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:02:01.850626   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:02:01.850656   27934 main.go:141] libmachine: Checking connection to Docker...
	I1026 01:02:01.850666   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetURL
	I1026 01:02:01.851985   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Using libvirt version 6000000
	I1026 01:02:01.853953   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.854248   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.854277   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.854395   27934 main.go:141] libmachine: Docker is up and running!
	I1026 01:02:01.854410   27934 main.go:141] libmachine: Reticulating splines...
	I1026 01:02:01.854416   27934 client.go:171] duration metric: took 23.935075321s to LocalClient.Create
	I1026 01:02:01.854435   27934 start.go:167] duration metric: took 23.935138215s to libmachine.API.Create "ha-300623"
	I1026 01:02:01.854442   27934 start.go:293] postStartSetup for "ha-300623-m03" (driver="kvm2")
	I1026 01:02:01.854455   27934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:02:01.854473   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:01.854694   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:02:01.854714   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.856743   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.857033   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.857061   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.857181   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.857358   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.857509   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.857636   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:02:01.939727   27934 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:02:01.943512   27934 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:02:01.943536   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:02:01.943602   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:02:01.943673   27934 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:02:01.943683   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /etc/ssl/certs/176152.pem
	I1026 01:02:01.943769   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:02:01.952556   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:02:01.974588   27934 start.go:296] duration metric: took 120.131633ms for postStartSetup
	I1026 01:02:01.974635   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetConfigRaw
	I1026 01:02:01.975249   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetIP
	I1026 01:02:01.977630   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.977939   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.977966   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.978201   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:02:01.978439   27934 start.go:128] duration metric: took 24.077650452s to createHost
	I1026 01:02:01.978471   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.981153   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.981663   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.981690   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.981836   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.981994   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.982159   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.982318   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.982480   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:01.982694   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:01.982711   27934 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:02:02.085984   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729904522.063699456
	
	I1026 01:02:02.086012   27934 fix.go:216] guest clock: 1729904522.063699456
	I1026 01:02:02.086022   27934 fix.go:229] Guest: 2024-10-26 01:02:02.063699456 +0000 UTC Remote: 2024-10-26 01:02:01.978456379 +0000 UTC m=+140.913817945 (delta=85.243077ms)
	I1026 01:02:02.086043   27934 fix.go:200] guest clock delta is within tolerance: 85.243077ms
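
The date +%s.%N round trip above is how minikube estimates guest/host clock skew before declaring the delta within tolerance. Below is a minimal sketch of that comparison under stated assumptions: clockDelta is a hypothetical helper, and the 2 second tolerance is an assumed value, not necessarily the one minikube uses.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the absolute
// skew against the given local timestamp.
func clockDelta(guestOutput string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Duration(math.Abs(float64(local.Sub(guest)))), nil
}

func main() {
	// Values reused from the log above for illustration.
	guestOut := "1729904522.063699456\n"
	local := time.Unix(0, 1729904521978456379) // host-side timestamp for the same moment

	delta, err := clockDelta(guestOut, local)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
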
	I1026 01:02:02.086049   27934 start.go:83] releasing machines lock for "ha-300623-m03", held for 24.185376811s
	I1026 01:02:02.086067   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:02.086287   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetIP
	I1026 01:02:02.088913   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.089268   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:02.089295   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.091504   27934 out.go:177] * Found network options:
	I1026 01:02:02.092955   27934 out.go:177]   - NO_PROXY=192.168.39.183,192.168.39.62
	W1026 01:02:02.094206   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	W1026 01:02:02.094236   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:02:02.094251   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:02.094803   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:02.094989   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:02.095095   27934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:02:02.095133   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	W1026 01:02:02.095154   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	W1026 01:02:02.095180   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:02:02.095247   27934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:02:02.095268   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:02.097751   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.098028   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.098086   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:02.098111   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.098235   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:02.098391   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:02.098497   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:02.098514   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.098524   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:02.098666   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:02:02.098717   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:02.098843   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:02.098984   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:02.099112   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:02:02.334862   27934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:02:02.340486   27934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:02:02.340547   27934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:02:02.357805   27934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 01:02:02.357834   27934 start.go:495] detecting cgroup driver to use...
	I1026 01:02:02.357898   27934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:02:02.374996   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:02:02.392000   27934 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:02:02.392086   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:02:02.407807   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:02:02.423965   27934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:02:02.552274   27934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:02:02.700711   27934 docker.go:233] disabling docker service ...
	I1026 01:02:02.700771   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:02:02.718236   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:02:02.732116   27934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:02:02.868905   27934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:02:02.980683   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:02:02.994225   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:02:03.012791   27934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:02:03.012857   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.023082   27934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:02:03.023153   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.033232   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.045462   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.056259   27934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:02:03.067151   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.077520   27934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.096669   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.106891   27934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:02:03.116392   27934 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 01:02:03.116458   27934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 01:02:03.129779   27934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:02:03.139745   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:02:03.248476   27934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 01:02:03.335933   27934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:02:03.336001   27934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:02:03.341028   27934 start.go:563] Will wait 60s for crictl version
	I1026 01:02:03.341087   27934 ssh_runner.go:195] Run: which crictl
	I1026 01:02:03.344865   27934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:02:03.384107   27934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:02:03.384182   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:02:03.413095   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:02:03.443714   27934 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:02:03.445737   27934 out.go:177]   - env NO_PROXY=192.168.39.183
	I1026 01:02:03.447586   27934 out.go:177]   - env NO_PROXY=192.168.39.183,192.168.39.62
	I1026 01:02:03.449031   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetIP
	I1026 01:02:03.452447   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:03.452878   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:03.452917   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:03.453179   27934 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:02:03.457652   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:02:03.471067   27934 mustload.go:65] Loading cluster: ha-300623
	I1026 01:02:03.471351   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:02:03.471669   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:02:03.471714   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:02:03.487194   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
	I1026 01:02:03.487657   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:02:03.488105   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:02:03.488127   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:02:03.488437   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:02:03.488638   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:02:03.490095   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:02:03.490500   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:02:03.490536   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:02:03.506020   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I1026 01:02:03.506418   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:02:03.506947   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:02:03.506976   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:02:03.507350   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:02:03.507527   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:02:03.507727   27934 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623 for IP: 192.168.39.180
	I1026 01:02:03.507740   27934 certs.go:194] generating shared ca certs ...
	I1026 01:02:03.507758   27934 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:02:03.507883   27934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:02:03.507924   27934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:02:03.507933   27934 certs.go:256] generating profile certs ...
	I1026 01:02:03.508003   27934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key
	I1026 01:02:03.508028   27934 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.71a5adc0
	I1026 01:02:03.508039   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.71a5adc0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.62 192.168.39.180 192.168.39.254]
	I1026 01:02:03.728822   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.71a5adc0 ...
	I1026 01:02:03.728854   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.71a5adc0: {Name:mk13b323a89a31df62edb3f93e2caa9ef5c95608 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:02:03.729026   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.71a5adc0 ...
	I1026 01:02:03.729038   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.71a5adc0: {Name:mk931eb52f244ae5eac81e077cce00cf1844fe8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:02:03.729110   27934 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.71a5adc0 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt
	I1026 01:02:03.729242   27934 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.71a5adc0 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key
	I1026 01:02:03.729367   27934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key
	I1026 01:02:03.729382   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:02:03.729396   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:02:03.729409   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:02:03.729443   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:02:03.729457   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:02:03.729475   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:02:03.729491   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:02:03.749554   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:02:03.749647   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:02:03.749686   27934 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:02:03.749696   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:02:03.749718   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:02:03.749740   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:02:03.749762   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:02:03.749801   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:02:03.749827   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:02:03.749842   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem -> /usr/share/ca-certificates/17615.pem
	I1026 01:02:03.749854   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /usr/share/ca-certificates/176152.pem
	I1026 01:02:03.749890   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:02:03.752989   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:02:03.753341   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:02:03.753364   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:02:03.753579   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:02:03.753776   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:02:03.753920   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:02:03.754076   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:02:03.829849   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1026 01:02:03.834830   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1026 01:02:03.846065   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1026 01:02:03.849963   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1026 01:02:03.859787   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1026 01:02:03.863509   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1026 01:02:03.873244   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1026 01:02:03.876871   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1026 01:02:03.892364   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1026 01:02:03.896520   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1026 01:02:03.907397   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1026 01:02:03.911631   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1026 01:02:03.924039   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:02:03.948397   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:02:03.971545   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:02:03.994742   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:02:04.019083   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1026 01:02:04.043193   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 01:02:04.066431   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:02:04.089556   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:02:04.112422   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:02:04.137648   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:02:04.163111   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:02:04.187974   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1026 01:02:04.204419   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1026 01:02:04.221407   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1026 01:02:04.240446   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1026 01:02:04.258125   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1026 01:02:04.274506   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1026 01:02:04.290927   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1026 01:02:04.307309   27934 ssh_runner.go:195] Run: openssl version
	I1026 01:02:04.312975   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:02:04.323808   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:02:04.328222   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:02:04.328286   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:02:04.334015   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:02:04.344665   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:02:04.355274   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:02:04.359793   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:02:04.359862   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:02:04.365345   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:02:04.376251   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:02:04.387304   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:02:04.391720   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:02:04.391792   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:02:04.397948   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
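The openssl x509 -hash -noout calls above compute OpenSSL's subject-name hash for each CA file, and that hash is what names the symlinks created alongside them (3ec20f2e.0, b5213941.0 and 51391683.0 in /etc/ssl/certs); OpenSSL relies on these hash-named links to locate trusted CAs during certificate verification.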
	I1026 01:02:04.409356   27934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:02:04.413518   27934 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 01:02:04.413569   27934 kubeadm.go:934] updating node {m03 192.168.39.180 8443 v1.31.2 crio true true} ...
	I1026 01:02:04.413666   27934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-300623-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
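The overrides in the unit above pin this kubelet to the new machine: --hostname-override=ha-300623-m03 and --node-ip=192.168.39.180 match the DHCP lease shown earlier for the m03 VM, and Wants=crio.service ties kubelet startup to the CRI-O runtime configured above.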
	I1026 01:02:04.413689   27934 kube-vip.go:115] generating kube-vip config ...
	I1026 01:02:04.413726   27934 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1026 01:02:04.429892   27934 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1026 01:02:04.429970   27934 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
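This static-pod manifest is the one copied below to /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes). Once kubelet runs it, kube-vip on each control-plane node takes part in a leader election (vip_leaderelection) and advertises the HA virtual IP 192.168.39.254 on eth0, which is the APIServerHAVIP that the join step reaches via control-plane.minikube.internal:8443.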
	I1026 01:02:04.430030   27934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:02:04.439803   27934 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1026 01:02:04.439857   27934 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1026 01:02:04.448835   27934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1026 01:02:04.448847   27934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1026 01:02:04.448867   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1026 01:02:04.448890   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:02:04.448924   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1026 01:02:04.448835   27934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1026 01:02:04.448969   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1026 01:02:04.449022   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1026 01:02:04.453004   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1026 01:02:04.453036   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1026 01:02:04.477386   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1026 01:02:04.477445   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1026 01:02:04.477465   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1026 01:02:04.477513   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1026 01:02:04.523830   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1026 01:02:04.523877   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1026 01:02:05.306345   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1026 01:02:05.316372   27934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1026 01:02:05.333527   27934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:02:05.350382   27934 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1026 01:02:05.366102   27934 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1026 01:02:05.369984   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:02:05.381182   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:02:05.496759   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:02:05.512263   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:02:05.512689   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:02:05.512740   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:02:05.531279   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40195
	I1026 01:02:05.531819   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:02:05.532966   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:02:05.532989   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:02:05.533339   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:02:05.533529   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:02:05.533682   27934 start.go:317] joinCluster: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:02:05.533839   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1026 01:02:05.533866   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:02:05.536583   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:02:05.537028   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:02:05.537057   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:02:05.537282   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:02:05.537491   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:02:05.537676   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:02:05.537795   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:02:05.697156   27934 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:02:05.697206   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v8d8ct.yqbxucpp9erkd2fb --discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m03 --control-plane --apiserver-advertise-address=192.168.39.180 --apiserver-bind-port=8443"
	I1026 01:02:29.292626   27934 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v8d8ct.yqbxucpp9erkd2fb --discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m03 --control-plane --apiserver-advertise-address=192.168.39.180 --apiserver-bind-port=8443": (23.595390034s)
	I1026 01:02:29.292667   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1026 01:02:29.885895   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-300623-m03 minikube.k8s.io/updated_at=2024_10_26T01_02_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=ha-300623 minikube.k8s.io/primary=false
	I1026 01:02:29.997019   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-300623-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1026 01:02:30.136451   27934 start.go:319] duration metric: took 24.602766496s to joinCluster
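In short, joining m03 took about 24.6s end to end: a join token is minted on the primary node, kubeadm join runs against the VIP endpoint control-plane.minikube.internal:8443 with --control-plane, kubelet is enabled and started, and the new node is then labeled and has its node-role.kubernetes.io/control-plane:NoSchedule taint removed (the trailing "-") so it can also schedule workloads, matching Worker:true in the node spec.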
	I1026 01:02:30.136544   27934 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:02:30.137000   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:02:30.137905   27934 out.go:177] * Verifying Kubernetes components...
	I1026 01:02:30.139044   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:02:30.389764   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:02:30.425326   27934 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:02:30.425691   27934 kapi.go:59] client config for ha-300623: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt", KeyFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key", CAFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 01:02:30.425759   27934 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.183:8443
	I1026 01:02:30.426058   27934 node_ready.go:35] waiting up to 6m0s for node "ha-300623-m03" to be "Ready" ...
	I1026 01:02:30.426159   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:30.426170   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:30.426180   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:30.426189   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:30.431156   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:02:30.926776   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:30.926801   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:30.926811   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:30.926819   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:30.930142   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:31.426736   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:31.426771   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:31.426783   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:31.426791   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:31.430233   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:31.926707   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:31.926732   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:31.926744   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:31.926753   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:31.929704   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:32.426493   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:32.426514   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:32.426522   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:32.426527   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:32.429836   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:32.430379   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:32.926337   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:32.926363   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:32.926376   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:32.926383   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:32.929516   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:33.426312   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:33.426334   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:33.426342   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:33.426364   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:33.430395   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:02:33.927020   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:33.927043   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:33.927050   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:33.927053   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:33.930539   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:34.426611   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:34.426637   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:34.426649   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:34.426653   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:34.429762   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:34.926585   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:34.926607   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:34.926616   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:34.926622   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:34.929963   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:34.930447   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:35.426739   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:35.426760   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:35.426786   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:35.426791   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:35.429676   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:35.926699   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:35.926723   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:35.926731   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:35.926735   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:35.930444   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:36.427025   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:36.427052   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:36.427063   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:36.427069   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:36.430961   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:36.926688   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:36.926715   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:36.926726   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:36.926732   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:36.930504   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:36.931114   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:37.426533   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:37.426568   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:37.426581   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:37.426588   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:37.434793   27934 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1026 01:02:37.926670   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:37.926699   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:37.926711   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:37.926717   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:37.929364   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:38.427306   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:38.427327   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:38.427335   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:38.427339   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:38.434499   27934 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1026 01:02:38.926882   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:38.926902   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:38.926911   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:38.926914   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:38.930831   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:38.931460   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:39.427252   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:39.427274   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:39.427283   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:39.427286   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:39.430650   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:39.926620   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:39.926643   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:39.926654   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:39.926661   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:39.930077   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:40.426363   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:40.426396   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:40.426408   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:40.426414   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:40.429976   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:40.926280   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:40.926310   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:40.926320   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:40.926325   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:40.929942   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:41.426533   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:41.426556   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:41.426563   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:41.426568   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:41.430315   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:41.431209   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:41.926498   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:41.926522   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:41.926529   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:41.926534   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:41.929738   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:42.426973   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:42.427006   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:42.427013   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:42.427019   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:42.430244   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:42.927247   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:42.927275   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:42.927283   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:42.927288   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:42.930906   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:43.426731   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:43.426759   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:43.426768   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:43.426773   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:43.430712   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:43.431301   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:43.926784   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:43.926823   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:43.926832   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:43.926835   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:43.929957   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:44.427237   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:44.427258   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:44.427266   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:44.427270   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:44.430769   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:44.926707   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:44.926731   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:44.926740   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:44.926743   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:44.930247   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:45.427043   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:45.427065   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:45.427074   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:45.427079   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:45.430820   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:45.431387   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:45.927275   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:45.927296   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:45.927304   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:45.927306   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:45.930627   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:46.426245   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:46.426266   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:46.426274   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:46.426278   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:46.429561   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:46.926352   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:46.926373   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:46.926384   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:46.926390   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:46.929454   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:47.426420   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:47.426462   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:47.426472   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:47.426477   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:47.430019   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:47.926864   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:47.926889   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:47.926900   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:47.926906   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:47.929997   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:47.930569   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:48.426656   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:48.426693   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.426709   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.426716   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.435417   27934 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1026 01:02:48.436037   27934 node_ready.go:49] node "ha-300623-m03" has status "Ready":"True"
	I1026 01:02:48.436062   27934 node_ready.go:38] duration metric: took 18.009981713s for node "ha-300623-m03" to be "Ready" ...
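All of the GETs above are the harness polling /api/v1/nodes/ha-300623-m03 until its Ready condition turns True. Outside the test code the same check can be expressed as a single kubectl wait (illustrative; the context name ha-300623 is assumed to match the profile):

    kubectl --context ha-300623 wait --for=condition=Ready node/ha-300623-m03 --timeout=6m0s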
	I1026 01:02:48.436077   27934 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:02:48.436165   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:48.436180   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.436190   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.436203   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.442639   27934 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1026 01:02:48.450258   27934 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.450343   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ntmgc
	I1026 01:02:48.450349   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.450356   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.450360   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.454261   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:48.454872   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:48.454888   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.454895   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.454900   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.459379   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:02:48.460137   27934 pod_ready.go:93] pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.460155   27934 pod_ready.go:82] duration metric: took 9.869467ms for pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.460165   27934 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.460215   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qx24f
	I1026 01:02:48.460224   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.460231   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.460233   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.463232   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.463771   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:48.463783   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.463792   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.463797   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.466281   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.466732   27934 pod_ready.go:93] pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.466748   27934 pod_ready.go:82] duration metric: took 6.577285ms for pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.466762   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.466818   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623
	I1026 01:02:48.466826   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.466833   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.466837   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.469268   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.469931   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:48.469946   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.469953   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.469957   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.472212   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.472664   27934 pod_ready.go:93] pod "etcd-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.472682   27934 pod_ready.go:82] duration metric: took 5.914156ms for pod "etcd-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.472691   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.472750   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623-m02
	I1026 01:02:48.472759   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.472766   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.472770   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.475167   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.475777   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:48.475794   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.475802   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.475806   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.478259   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.478687   27934 pod_ready.go:93] pod "etcd-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.478703   27934 pod_ready.go:82] duration metric: took 6.006167ms for pod "etcd-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.478711   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.627599   27934 request.go:632] Waited for 148.830245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623-m03
	I1026 01:02:48.627657   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623-m03
	I1026 01:02:48.627667   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.627674   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.627680   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.631663   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:48.827561   27934 request.go:632] Waited for 195.345637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:48.827630   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:48.827637   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.827645   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.827649   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.831042   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:48.831791   27934 pod_ready.go:93] pod "etcd-ha-300623-m03" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.831815   27934 pod_ready.go:82] duration metric: took 353.094836ms for pod "etcd-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.831835   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.027283   27934 request.go:632] Waited for 195.388128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623
	I1026 01:02:49.027360   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623
	I1026 01:02:49.027365   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.027373   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.027380   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.030439   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:49.227538   27934 request.go:632] Waited for 196.377694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:49.227614   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:49.227627   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.227643   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.227650   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.230823   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:49.231339   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:49.231360   27934 pod_ready.go:82] duration metric: took 399.517961ms for pod "kube-apiserver-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.231374   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.426746   27934 request.go:632] Waited for 195.299777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m02
	I1026 01:02:49.426820   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m02
	I1026 01:02:49.426826   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.426833   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.426842   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.430033   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:49.626896   27934 request.go:632] Waited for 196.298512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:49.626964   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:49.626970   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.626977   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.626980   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.630142   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:49.630626   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:49.630645   27934 pod_ready.go:82] duration metric: took 399.259883ms for pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.630655   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.826666   27934 request.go:632] Waited for 195.934282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m03
	I1026 01:02:49.826722   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m03
	I1026 01:02:49.826727   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.826739   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.826744   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.830021   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.027111   27934 request.go:632] Waited for 196.361005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:50.027198   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:50.027210   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.027222   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.027231   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.030533   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.031215   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623-m03" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:50.031238   27934 pod_ready.go:82] duration metric: took 400.574994ms for pod "kube-apiserver-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.031268   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.227253   27934 request.go:632] Waited for 195.903041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623
	I1026 01:02:50.227309   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623
	I1026 01:02:50.227314   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.227321   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.227325   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.230415   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.427535   27934 request.go:632] Waited for 196.340381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:50.427594   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:50.427602   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.427612   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.427619   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.430823   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.431395   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:50.431413   27934 pod_ready.go:82] duration metric: took 400.135776ms for pod "kube-controller-manager-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.431426   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.626990   27934 request.go:632] Waited for 195.470744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m02
	I1026 01:02:50.627069   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m02
	I1026 01:02:50.627075   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.627082   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.627087   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.630185   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.827370   27934 request.go:632] Waited for 196.34647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:50.827442   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:50.827448   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.827455   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.827461   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.831085   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.831842   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:50.831859   27934 pod_ready.go:82] duration metric: took 400.426225ms for pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.831869   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.027015   27934 request.go:632] Waited for 195.078027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m03
	I1026 01:02:51.027084   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m03
	I1026 01:02:51.027092   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.027099   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.027103   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.031047   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:51.227422   27934 request.go:632] Waited for 195.619523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:51.227479   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:51.227484   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.227492   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.227495   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.231982   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:02:51.232544   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623-m03" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:51.232570   27934 pod_ready.go:82] duration metric: took 400.691296ms for pod "kube-controller-manager-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.232584   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-65rns" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.427652   27934 request.go:632] Waited for 194.988908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65rns
	I1026 01:02:51.427748   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65rns
	I1026 01:02:51.427756   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.427763   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.427769   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.431107   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:51.627383   27934 request.go:632] Waited for 195.646071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:51.627443   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:51.627450   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.627459   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.627465   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.630345   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:51.630913   27934 pod_ready.go:93] pod "kube-proxy-65rns" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:51.630940   27934 pod_ready.go:82] duration metric: took 398.33791ms for pod "kube-proxy-65rns" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.630957   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7hn2d" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.826903   27934 request.go:632] Waited for 195.872288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hn2d
	I1026 01:02:51.826976   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hn2d
	I1026 01:02:51.826981   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.826989   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.826995   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.830596   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.027634   27934 request.go:632] Waited for 196.404478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:52.027720   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:52.027729   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.027740   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.027744   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.031724   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.032488   27934 pod_ready.go:93] pod "kube-proxy-7hn2d" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:52.032512   27934 pod_ready.go:82] duration metric: took 401.542551ms for pod "kube-proxy-7hn2d" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.032525   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mv7sf" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.227636   27934 request.go:632] Waited for 195.035156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mv7sf
	I1026 01:02:52.227691   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mv7sf
	I1026 01:02:52.227697   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.227705   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.227713   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.230866   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.426675   27934 request.go:632] Waited for 195.29136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:52.426757   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:52.426765   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.426775   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.426782   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.429979   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.430570   27934 pod_ready.go:93] pod "kube-proxy-mv7sf" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:52.430594   27934 pod_ready.go:82] duration metric: took 398.058369ms for pod "kube-proxy-mv7sf" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.430608   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.627616   27934 request.go:632] Waited for 196.938648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623
	I1026 01:02:52.627691   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623
	I1026 01:02:52.627697   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.627704   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.627709   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.631135   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.827333   27934 request.go:632] Waited for 195.390365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:52.827388   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:52.827397   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.827404   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.827409   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.830746   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.831581   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:52.831599   27934 pod_ready.go:82] duration metric: took 400.983275ms for pod "kube-scheduler-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.831611   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:53.026899   27934 request.go:632] Waited for 195.225563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m02
	I1026 01:02:53.026954   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m02
	I1026 01:02:53.026959   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.026967   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.026971   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.030270   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:53.227500   27934 request.go:632] Waited for 196.386112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:53.227559   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:53.227564   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.227572   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.227577   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.231336   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:53.231867   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:53.231885   27934 pod_ready.go:82] duration metric: took 400.266151ms for pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:53.231896   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:53.426974   27934 request.go:632] Waited for 194.996598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m03
	I1026 01:02:53.427025   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m03
	I1026 01:02:53.427030   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.427037   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.427041   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.430377   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:53.626766   27934 request.go:632] Waited for 195.735993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:53.626824   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:53.626829   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.626836   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.626840   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.630167   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:53.630954   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623-m03" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:53.630975   27934 pod_ready.go:82] duration metric: took 399.071645ms for pod "kube-scheduler-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:53.630992   27934 pod_ready.go:39] duration metric: took 5.19490109s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:02:53.631015   27934 api_server.go:52] waiting for apiserver process to appear ...
	I1026 01:02:53.631076   27934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:02:53.646977   27934 api_server.go:72] duration metric: took 23.510394339s to wait for apiserver process to appear ...
	I1026 01:02:53.647007   27934 api_server.go:88] waiting for apiserver healthz status ...
	I1026 01:02:53.647030   27934 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1026 01:02:53.651895   27934 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1026 01:02:53.651966   27934 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I1026 01:02:53.651972   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.651979   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.651983   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.652674   27934 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1026 01:02:53.652802   27934 api_server.go:141] control plane version: v1.31.2
	I1026 01:02:53.652821   27934 api_server.go:131] duration metric: took 5.805941ms to wait for apiserver health ...
	I1026 01:02:53.652830   27934 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 01:02:53.827168   27934 request.go:632] Waited for 174.273301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:53.827222   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:53.827228   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.827235   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.827240   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.834306   27934 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1026 01:02:53.841838   27934 system_pods.go:59] 24 kube-system pods found
	I1026 01:02:53.841872   27934 system_pods.go:61] "coredns-7c65d6cfc9-ntmgc" [b2e07a8a-ed53-4151-9cdd-6345d84fea7d] Running
	I1026 01:02:53.841879   27934 system_pods.go:61] "coredns-7c65d6cfc9-qx24f" [d7fc0eb5-4828-436f-a5c8-8de607f590cf] Running
	I1026 01:02:53.841885   27934 system_pods.go:61] "etcd-ha-300623" [7af25c40-90db-43fb-9d1c-02d3b6092d30] Running
	I1026 01:02:53.841891   27934 system_pods.go:61] "etcd-ha-300623-m02" [5e6978a1-41aa-46dd-a1cd-e02042d4ce04] Running
	I1026 01:02:53.841897   27934 system_pods.go:61] "etcd-ha-300623-m03" [018c3dbe-0bf5-489e-804a-fb1e3195eded] Running
	I1026 01:02:53.841901   27934 system_pods.go:61] "kindnet-2v827" [0a2f3ac1-e6ff-4f8a-83bd-0b8c82e2070b] Running
	I1026 01:02:53.841906   27934 system_pods.go:61] "kindnet-4cqmf" [c887471a-629c-4bf1-9296-8ccb5ba56cd6] Running
	I1026 01:02:53.841911   27934 system_pods.go:61] "kindnet-g5bkb" [0ad4551d-8c28-45b3-9563-03d427208f4f] Running
	I1026 01:02:53.841916   27934 system_pods.go:61] "kube-apiserver-ha-300623" [23f40207-db77-4a02-a2dc-eecea5b1874a] Running
	I1026 01:02:53.841921   27934 system_pods.go:61] "kube-apiserver-ha-300623-m02" [6e2d1aeb-ad12-4328-b4da-6b3a2fd19df0] Running
	I1026 01:02:53.841927   27934 system_pods.go:61] "kube-apiserver-ha-300623-m03" [4f6f2be0-c13c-48d1-b645-719d861bfc9d] Running
	I1026 01:02:53.841932   27934 system_pods.go:61] "kube-controller-manager-ha-300623" [b9c979d4-64e6-473c-b688-295ddd98c379] Running
	I1026 01:02:53.841938   27934 system_pods.go:61] "kube-controller-manager-ha-300623-m02" [4ae0dbcd-d50c-4a53-9347-bed0a06f1f15] Running
	I1026 01:02:53.841945   27934 system_pods.go:61] "kube-controller-manager-ha-300623-m03" [43a89828-44bd-4c39-8656-ce212592e684] Running
	I1026 01:02:53.841951   27934 system_pods.go:61] "kube-proxy-65rns" [895d0bd9-0f38-442f-99a2-6c5c70bddd39] Running
	I1026 01:02:53.841959   27934 system_pods.go:61] "kube-proxy-7hn2d" [8ffc007b-7e17-4810-9f44-f190a8a7d21b] Running
	I1026 01:02:53.841964   27934 system_pods.go:61] "kube-proxy-mv7sf" [687c9b8d-6dc7-46b4-b5c6-dce15b93fe5c] Running
	I1026 01:02:53.841970   27934 system_pods.go:61] "kube-scheduler-ha-300623" [fcbddffd-40d8-4ebd-bf1e-58b1457af487] Running
	I1026 01:02:53.841976   27934 system_pods.go:61] "kube-scheduler-ha-300623-m02" [81664577-53a3-46fd-98f0-5a517d60fc40] Running
	I1026 01:02:53.841982   27934 system_pods.go:61] "kube-scheduler-ha-300623-m03" [4e0f23a0-d27b-4a4f-88cb-9f9fd09cc873] Running
	I1026 01:02:53.841992   27934 system_pods.go:61] "kube-vip-ha-300623" [23c24ab4-cff5-48fa-841b-9567360cbb00] Running
	I1026 01:02:53.841998   27934 system_pods.go:61] "kube-vip-ha-300623-m02" [5e054e06-be47-4fca-bf3d-d0919d31fe23] Running
	I1026 01:02:53.842006   27934 system_pods.go:61] "kube-vip-ha-300623-m03" [e650a523-9ff0-41d2-9446-c84aa4f0b88c] Running
	I1026 01:02:53.842011   27934 system_pods.go:61] "storage-provisioner" [28d286b1-45b3-4775-a8ff-47dc3cb84792] Running
	I1026 01:02:53.842020   27934 system_pods.go:74] duration metric: took 189.182306ms to wait for pod list to return data ...
	I1026 01:02:53.842033   27934 default_sa.go:34] waiting for default service account to be created ...
	I1026 01:02:54.027353   27934 request.go:632] Waited for 185.245125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:02:54.027412   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:02:54.027420   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:54.027431   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:54.027441   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:54.030973   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:54.031077   27934 default_sa.go:45] found service account: "default"
	I1026 01:02:54.031089   27934 default_sa.go:55] duration metric: took 189.048618ms for default service account to be created ...
	I1026 01:02:54.031098   27934 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 01:02:54.227423   27934 request.go:632] Waited for 196.255704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:54.227482   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:54.227493   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:54.227507   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:54.227517   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:54.232907   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:02:54.240539   27934 system_pods.go:86] 24 kube-system pods found
	I1026 01:02:54.240565   27934 system_pods.go:89] "coredns-7c65d6cfc9-ntmgc" [b2e07a8a-ed53-4151-9cdd-6345d84fea7d] Running
	I1026 01:02:54.240571   27934 system_pods.go:89] "coredns-7c65d6cfc9-qx24f" [d7fc0eb5-4828-436f-a5c8-8de607f590cf] Running
	I1026 01:02:54.240574   27934 system_pods.go:89] "etcd-ha-300623" [7af25c40-90db-43fb-9d1c-02d3b6092d30] Running
	I1026 01:02:54.240578   27934 system_pods.go:89] "etcd-ha-300623-m02" [5e6978a1-41aa-46dd-a1cd-e02042d4ce04] Running
	I1026 01:02:54.240582   27934 system_pods.go:89] "etcd-ha-300623-m03" [018c3dbe-0bf5-489e-804a-fb1e3195eded] Running
	I1026 01:02:54.240586   27934 system_pods.go:89] "kindnet-2v827" [0a2f3ac1-e6ff-4f8a-83bd-0b8c82e2070b] Running
	I1026 01:02:54.240589   27934 system_pods.go:89] "kindnet-4cqmf" [c887471a-629c-4bf1-9296-8ccb5ba56cd6] Running
	I1026 01:02:54.240592   27934 system_pods.go:89] "kindnet-g5bkb" [0ad4551d-8c28-45b3-9563-03d427208f4f] Running
	I1026 01:02:54.240595   27934 system_pods.go:89] "kube-apiserver-ha-300623" [23f40207-db77-4a02-a2dc-eecea5b1874a] Running
	I1026 01:02:54.240599   27934 system_pods.go:89] "kube-apiserver-ha-300623-m02" [6e2d1aeb-ad12-4328-b4da-6b3a2fd19df0] Running
	I1026 01:02:54.240602   27934 system_pods.go:89] "kube-apiserver-ha-300623-m03" [4f6f2be0-c13c-48d1-b645-719d861bfc9d] Running
	I1026 01:02:54.240606   27934 system_pods.go:89] "kube-controller-manager-ha-300623" [b9c979d4-64e6-473c-b688-295ddd98c379] Running
	I1026 01:02:54.240609   27934 system_pods.go:89] "kube-controller-manager-ha-300623-m02" [4ae0dbcd-d50c-4a53-9347-bed0a06f1f15] Running
	I1026 01:02:54.240613   27934 system_pods.go:89] "kube-controller-manager-ha-300623-m03" [43a89828-44bd-4c39-8656-ce212592e684] Running
	I1026 01:02:54.240616   27934 system_pods.go:89] "kube-proxy-65rns" [895d0bd9-0f38-442f-99a2-6c5c70bddd39] Running
	I1026 01:02:54.240620   27934 system_pods.go:89] "kube-proxy-7hn2d" [8ffc007b-7e17-4810-9f44-f190a8a7d21b] Running
	I1026 01:02:54.240624   27934 system_pods.go:89] "kube-proxy-mv7sf" [687c9b8d-6dc7-46b4-b5c6-dce15b93fe5c] Running
	I1026 01:02:54.240627   27934 system_pods.go:89] "kube-scheduler-ha-300623" [fcbddffd-40d8-4ebd-bf1e-58b1457af487] Running
	I1026 01:02:54.240632   27934 system_pods.go:89] "kube-scheduler-ha-300623-m02" [81664577-53a3-46fd-98f0-5a517d60fc40] Running
	I1026 01:02:54.240635   27934 system_pods.go:89] "kube-scheduler-ha-300623-m03" [4e0f23a0-d27b-4a4f-88cb-9f9fd09cc873] Running
	I1026 01:02:54.240641   27934 system_pods.go:89] "kube-vip-ha-300623" [23c24ab4-cff5-48fa-841b-9567360cbb00] Running
	I1026 01:02:54.240644   27934 system_pods.go:89] "kube-vip-ha-300623-m02" [5e054e06-be47-4fca-bf3d-d0919d31fe23] Running
	I1026 01:02:54.240647   27934 system_pods.go:89] "kube-vip-ha-300623-m03" [e650a523-9ff0-41d2-9446-c84aa4f0b88c] Running
	I1026 01:02:54.240650   27934 system_pods.go:89] "storage-provisioner" [28d286b1-45b3-4775-a8ff-47dc3cb84792] Running
	I1026 01:02:54.240656   27934 system_pods.go:126] duration metric: took 209.550822ms to wait for k8s-apps to be running ...
	I1026 01:02:54.240667   27934 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 01:02:54.240705   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:02:54.259476   27934 system_svc.go:56] duration metric: took 18.80003ms WaitForService to wait for kubelet
	I1026 01:02:54.259503   27934 kubeadm.go:582] duration metric: took 24.122925603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 01:02:54.259520   27934 node_conditions.go:102] verifying NodePressure condition ...
	I1026 01:02:54.427334   27934 request.go:632] Waited for 167.728559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I1026 01:02:54.427409   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I1026 01:02:54.427417   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:54.427430   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:54.427440   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:54.431191   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:54.432324   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:02:54.432349   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:02:54.432365   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:02:54.432369   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:02:54.432378   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:02:54.432383   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:02:54.432391   27934 node_conditions.go:105] duration metric: took 172.867066ms to run NodePressure ...
	I1026 01:02:54.432404   27934 start.go:241] waiting for startup goroutines ...
	I1026 01:02:54.432431   27934 start.go:255] writing updated cluster config ...
	I1026 01:02:54.432784   27934 ssh_runner.go:195] Run: rm -f paused
	I1026 01:02:54.484591   27934 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1026 01:02:54.487070   27934 out.go:177] * Done! kubectl is now configured to use "ha-300623" cluster and "default" namespace by default
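(Editor's note, not part of the captured log: the run above ends with the harness polling the kube-apiserver until every system-critical pod reports "Ready" and then probing /healthz for the literal response "ok". The commands below are a minimal illustrative sketch of those same two checks against the ha-300623 cluster named in the log; the label selector and endpoint are taken from the log lines above, but the commands themselves are assumptions for illustration and were not run by the test.)

    # Wait for the kube-apiserver pods the log enumerates to report Ready
    # (the log also waits on etcd, kube-controller-manager, kube-scheduler,
    #  kube-proxy and CoreDNS with analogous selectors).
    kubectl --context ha-300623 -n kube-system wait pod \
      -l component=kube-apiserver --for=condition=Ready --timeout=6m

    # The harness then expects https://192.168.39.183:8443/healthz to return "ok";
    # depending on the cluster's anonymous-auth settings this may require credentials.
    curl -k https://192.168.39.183:8443/healthz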
	
	
	==> CRI-O <==
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.829271014Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904805829247433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d0a7f6d-bc13-4da1-8572-9e9c31b7df56 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.830079331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6365f1c3-faf2-4d6c-9988-3770cb45bf23 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.830137120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6365f1c3-faf2-4d6c-9988-3770cb45bf23 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.834007436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85cbf0b8850a2112e92fcc3614b8431c369be6d12b745402809010b5c69e6855,PodSandboxId:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1729904578918936204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758,PodSandboxId:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438995903574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d,PodSandboxId:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438988403122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d7fc0eb5-4828-436f-a5c8-8de607f590cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862c0633984db26e703979be6515817dbe5b1bab13be77cbd4231bdb96801841,PodSandboxId:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1729904437981904808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde,PodSandboxId:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17299044
25720308757,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa,PodSandboxId:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729904425491717711,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c,PodSandboxId:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1729904415339625152,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b,PodSandboxId:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729904412567756795,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3,PodSandboxId:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729904412574844578,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901,PodSandboxId:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729904412570090151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d,PodSandboxId:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729904412474137473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6365f1c3-faf2-4d6c-9988-3770cb45bf23 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.874245532Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc2189d2-ffee-4e60-9e86-01acbb83f8ee name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.874337194Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc2189d2-ffee-4e60-9e86-01acbb83f8ee name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.876908098Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d11ded0-52b5-4d7a-9912-9d7dafbe699d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.877596935Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904805877571849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d11ded0-52b5-4d7a-9912-9d7dafbe699d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.878300914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68cd6012-31b3-4067-b0a3-e416a8b325d5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.878362523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68cd6012-31b3-4067-b0a3-e416a8b325d5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.878607500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85cbf0b8850a2112e92fcc3614b8431c369be6d12b745402809010b5c69e6855,PodSandboxId:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1729904578918936204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758,PodSandboxId:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438995903574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d,PodSandboxId:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438988403122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d7fc0eb5-4828-436f-a5c8-8de607f590cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862c0633984db26e703979be6515817dbe5b1bab13be77cbd4231bdb96801841,PodSandboxId:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1729904437981904808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde,PodSandboxId:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17299044
25720308757,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa,PodSandboxId:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729904425491717711,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c,PodSandboxId:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1729904415339625152,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b,PodSandboxId:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729904412567756795,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3,PodSandboxId:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729904412574844578,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901,PodSandboxId:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729904412570090151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d,PodSandboxId:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729904412474137473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68cd6012-31b3-4067-b0a3-e416a8b325d5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.916591720Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=64edb3b3-b65d-47c3-af2f-59e4bef47c2b name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.916757562Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64edb3b3-b65d-47c3-af2f-59e4bef47c2b name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.918112829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b87a6e3-ed5f-442f-846f-ee540f484726 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.918524597Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904805918505594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b87a6e3-ed5f-442f-846f-ee540f484726 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.919066631Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8996877c-e566-43e0-a01e-e497168c749c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.919135373Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8996877c-e566-43e0-a01e-e497168c749c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.919400889Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85cbf0b8850a2112e92fcc3614b8431c369be6d12b745402809010b5c69e6855,PodSandboxId:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1729904578918936204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758,PodSandboxId:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438995903574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d,PodSandboxId:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438988403122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d7fc0eb5-4828-436f-a5c8-8de607f590cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862c0633984db26e703979be6515817dbe5b1bab13be77cbd4231bdb96801841,PodSandboxId:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1729904437981904808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde,PodSandboxId:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17299044
25720308757,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa,PodSandboxId:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729904425491717711,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c,PodSandboxId:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1729904415339625152,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b,PodSandboxId:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729904412567756795,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3,PodSandboxId:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729904412574844578,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901,PodSandboxId:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729904412570090151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d,PodSandboxId:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729904412474137473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8996877c-e566-43e0-a01e-e497168c749c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.955004680Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a05e9e1-d057-4282-9e7c-7e430bc5d8b6 name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.955105712Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a05e9e1-d057-4282-9e7c-7e430bc5d8b6 name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.956921907Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f3f8919-3d30-47ab-b860-791cce9c6121 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.957365781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904805957342353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f3f8919-3d30-47ab-b860-791cce9c6121 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.958005396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0ff3f49-0478-4b0e-ab5b-d912034c3502 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.958074041Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0ff3f49-0478-4b0e-ab5b-d912034c3502 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:45 ha-300623 crio[655]: time="2024-10-26 01:06:45.958302144Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85cbf0b8850a2112e92fcc3614b8431c369be6d12b745402809010b5c69e6855,PodSandboxId:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1729904578918936204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758,PodSandboxId:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438995903574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d,PodSandboxId:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438988403122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d7fc0eb5-4828-436f-a5c8-8de607f590cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862c0633984db26e703979be6515817dbe5b1bab13be77cbd4231bdb96801841,PodSandboxId:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1729904437981904808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde,PodSandboxId:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17299044
25720308757,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa,PodSandboxId:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729904425491717711,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c,PodSandboxId:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1729904415339625152,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b,PodSandboxId:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729904412567756795,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3,PodSandboxId:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729904412574844578,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901,PodSandboxId:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729904412570090151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d,PodSandboxId:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729904412474137473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0ff3f49-0478-4b0e-ab5b-d912034c3502 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85cbf0b8850a2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   731eca9181f8b       busybox-7dff88458-x8rtl
	ca2bd9d7fe0a2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   20e3c054f64b8       coredns-7c65d6cfc9-ntmgc
	56c849c3f6d25       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   d580ea18268bf       coredns-7c65d6cfc9-qx24f
	862c0633984db       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   f6635176e0517       storage-provisioner
	d6d0d55128c15       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                      6 minutes ago       Running             kindnet-cni               0                   cffe8a0cf602c       kindnet-4cqmf
	f7fca08cb5de6       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   94078692adcf1       kube-proxy-65rns
	a103c72040168       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215     6 minutes ago       Running             kube-vip                  0                   620e95994188b       kube-vip-ha-300623
	47a0b2ec9c50d       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   f86f0547d7e3f       kube-controller-manager-ha-300623
	3e321e090fa4b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   a63bff1c62868       etcd-ha-300623
	3c25e47b58ddc       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   9b38c5bcef6f6       kube-scheduler-ha-300623
	3bcea9b84ac37       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   e9bc0343ef669       kube-apiserver-ha-300623
	
	
	==> coredns [56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d] <==
	[INFO] 10.244.0.4:35752 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000083964s
	[INFO] 10.244.0.4:46160 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000070172s
	[INFO] 10.244.2.2:48496 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233704s
	[INFO] 10.244.2.2:43326 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002692245s
	[INFO] 10.244.1.2:54632 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145197s
	[INFO] 10.244.1.2:39137 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001866788s
	[INFO] 10.244.1.2:37569 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000241474s
	[INFO] 10.244.0.4:42983 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170463s
	[INFO] 10.244.0.4:34095 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002204796s
	[INFO] 10.244.0.4:47258 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001867963s
	[INFO] 10.244.0.4:59491 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141493s
	[INFO] 10.244.0.4:57514 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133403s
	[INFO] 10.244.0.4:45585 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000174758s
	[INFO] 10.244.2.2:57387 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165086s
	[INFO] 10.244.2.2:37898 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136051s
	[INFO] 10.244.1.2:45240 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130797s
	[INFO] 10.244.1.2:40585 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000259318s
	[INFO] 10.244.1.2:54189 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089088s
	[INFO] 10.244.1.2:56872 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108098s
	[INFO] 10.244.0.4:43642 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083444s
	[INFO] 10.244.2.2:37138 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000161058s
	[INFO] 10.244.1.2:45522 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000237498s
	[INFO] 10.244.1.2:48964 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000122296s
	[INFO] 10.244.0.4:46128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168182s
	[INFO] 10.244.0.4:35635 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143147s
	
	
	==> coredns [ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758] <==
	[INFO] 10.244.2.2:54963 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004547023s
	[INFO] 10.244.2.2:34531 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000244595s
	[INFO] 10.244.2.2:44217 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000362208s
	[INFO] 10.244.2.2:60780 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018037s
	[INFO] 10.244.2.2:60725 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000259265s
	[INFO] 10.244.2.2:33992 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168214s
	[INFO] 10.244.1.2:48441 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000237097s
	[INFO] 10.244.1.2:50414 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002508011s
	[INFO] 10.244.1.2:36962 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211094s
	[INFO] 10.244.1.2:45147 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163251s
	[INFO] 10.244.1.2:56149 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125966s
	[INFO] 10.244.0.4:56735 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092196s
	[INFO] 10.244.0.4:37487 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002015s
	[INFO] 10.244.2.2:53825 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125794s
	[INFO] 10.244.2.2:52505 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000213989s
	[INFO] 10.244.0.4:37131 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125177s
	[INFO] 10.244.0.4:45742 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131329s
	[INFO] 10.244.0.4:52634 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089226s
	[INFO] 10.244.2.2:58146 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000286556s
	[INFO] 10.244.2.2:59488 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000218728s
	[INFO] 10.244.2.2:51165 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00028421s
	[INFO] 10.244.1.2:37736 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160474s
	[INFO] 10.244.1.2:60585 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000238531s
	[INFO] 10.244.0.4:46233 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000078598s
	[INFO] 10.244.0.4:39578 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000277206s
	
	
	==> describe nodes <==
	Name:               ha-300623
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-300623
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=ha-300623
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_26T01_00_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:00:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-300623
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:06:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 01:03:22 +0000   Sat, 26 Oct 2024 01:00:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 01:03:22 +0000   Sat, 26 Oct 2024 01:00:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 01:03:22 +0000   Sat, 26 Oct 2024 01:00:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 01:03:22 +0000   Sat, 26 Oct 2024 01:00:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-300623
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 92684f32bf5c4a5ea50d57cd59f5b8ee
	  System UUID:                92684f32-bf5c-4a5e-a50d-57cd59f5b8ee
	  Boot ID:                    3d5330c9-a2ef-4296-ab11-4c9bb32f97df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x8rtl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 coredns-7c65d6cfc9-ntmgc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 coredns-7c65d6cfc9-qx24f             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 etcd-ha-300623                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m27s
	  kube-system                 kindnet-4cqmf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m23s
	  kube-system                 kube-apiserver-ha-300623             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-controller-manager-ha-300623    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-proxy-65rns                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-scheduler-ha-300623             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-vip-ha-300623                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m20s  kube-proxy       
	  Normal  Starting                 6m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m27s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m27s  kubelet          Node ha-300623 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m27s  kubelet          Node ha-300623 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m27s  kubelet          Node ha-300623 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m24s  node-controller  Node ha-300623 event: Registered Node ha-300623 in Controller
	  Normal  NodeReady                6m9s   kubelet          Node ha-300623 status is now: NodeReady
	  Normal  RegisteredNode           5m25s  node-controller  Node ha-300623 event: Registered Node ha-300623 in Controller
	  Normal  RegisteredNode           4m11s  node-controller  Node ha-300623 event: Registered Node ha-300623 in Controller
	
	
	Name:               ha-300623-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-300623-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=ha-300623
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_26T01_01_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:01:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-300623-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:04:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 26 Oct 2024 01:03:16 +0000   Sat, 26 Oct 2024 01:04:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 26 Oct 2024 01:03:16 +0000   Sat, 26 Oct 2024 01:04:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 26 Oct 2024 01:03:16 +0000   Sat, 26 Oct 2024 01:04:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 26 Oct 2024 01:03:16 +0000   Sat, 26 Oct 2024 01:04:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    ha-300623-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 619e0e81a0ef43a9b2e79bbc4eb9355e
	  System UUID:                619e0e81-a0ef-43a9-b2e7-9bbc4eb9355e
	  Boot ID:                    89b92f6c-664b-4721-8f8c-216a0ad0c2d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qtdcl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 etcd-ha-300623-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m31s
	  kube-system                 kindnet-g5bkb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m33s
	  kube-system                 kube-apiserver-ha-300623-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-controller-manager-ha-300623-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-7hn2d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-scheduler-ha-300623-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-vip-ha-300623-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m28s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m33s (x8 over 5m33s)  kubelet          Node ha-300623-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s (x8 over 5m33s)  kubelet          Node ha-300623-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s (x7 over 5m33s)  kubelet          Node ha-300623-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m29s                  node-controller  Node ha-300623-m02 event: Registered Node ha-300623-m02 in Controller
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-300623-m02 event: Registered Node ha-300623-m02 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-300623-m02 event: Registered Node ha-300623-m02 in Controller
	  Normal  NodeNotReady             119s                   node-controller  Node ha-300623-m02 status is now: NodeNotReady
	
	
	Name:               ha-300623-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-300623-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=ha-300623
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_26T01_02_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:02:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-300623-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:06:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 01:03:27 +0000   Sat, 26 Oct 2024 01:02:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 01:03:27 +0000   Sat, 26 Oct 2024 01:02:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 01:03:27 +0000   Sat, 26 Oct 2024 01:02:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 01:03:27 +0000   Sat, 26 Oct 2024 01:02:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.180
	  Hostname:    ha-300623-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 97987e99f2594f70b58fe3aa149b6c7c
	  System UUID:                97987e99-f259-4f70-b58f-e3aa149b6c7c
	  Boot ID:                    7e140c77-fbc1-46f9-addb-72cf937d1703
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mbn94                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 etcd-ha-300623-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kindnet-2v827                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m20s
	  kube-system                 kube-apiserver-ha-300623-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-ha-300623-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-proxy-mv7sf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-scheduler-ha-300623-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-vip-ha-300623-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m15s                  kube-proxy       
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-300623-m03 event: Registered Node ha-300623-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m20s (x8 over 4m20s)  kubelet          Node ha-300623-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s (x8 over 4m20s)  kubelet          Node ha-300623-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s (x7 over 4m20s)  kubelet          Node ha-300623-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-300623-m03 event: Registered Node ha-300623-m03 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-300623-m03 event: Registered Node ha-300623-m03 in Controller
	
	
	Name:               ha-300623-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-300623-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=ha-300623
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_26T01_03_33_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:03:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-300623-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:06:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 01:04:03 +0000   Sat, 26 Oct 2024 01:03:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 01:04:03 +0000   Sat, 26 Oct 2024 01:03:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 01:04:03 +0000   Sat, 26 Oct 2024 01:03:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 01:04:03 +0000   Sat, 26 Oct 2024 01:03:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.197
	  Hostname:    ha-300623-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 505edce099ab4a75b83037ad7ab46771
	  System UUID:                505edce0-99ab-4a75-b830-37ad7ab46771
	  Boot ID:                    896f9280-eb70-46a8-9d85-c3814086494a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fsnn6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m13s
	  kube-system                 kube-proxy-4zk2k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m13s (x2 over 3m14s)  kubelet          Node ha-300623-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m13s (x2 over 3m14s)  kubelet          Node ha-300623-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m13s (x2 over 3m14s)  kubelet          Node ha-300623-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-300623-m04 event: Registered Node ha-300623-m04 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-300623-m04 event: Registered Node ha-300623-m04 in Controller
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-300623-m04 event: Registered Node ha-300623-m04 in Controller
	  Normal  NodeReady                2m54s                  kubelet          Node ha-300623-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct26 00:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050258] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037804] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.782226] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.951939] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.521399] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct26 01:00] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.061621] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060766] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.166618] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.145628] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.268359] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +3.874441] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.666530] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.060776] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.257866] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.091250] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.528305] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.572352] kauditd_printk_skb: 41 callbacks suppressed
	[Oct26 01:01] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901] <==
	{"level":"warn","ts":"2024-10-26T01:06:46.195780Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.204821Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.208691Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.216256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.222919Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.229201Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.240849Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.244177Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.250019Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.255719Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.259264Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.262080Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.263930Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.265791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.268563Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.274493Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.280117Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.285705Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.289022Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.291838Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.295272Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.301684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.302791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.308237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:46.358714Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 01:06:46 up 7 min,  0 users,  load average: 0.19, 0.25, 0.13
	Linux ha-300623 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde] <==
	I1026 01:06:07.184462       1 main.go:323] Node ha-300623-m04 has CIDR [10.244.3.0/24] 
	I1026 01:06:17.174569       1 main.go:296] Handling node with IPs: map[192.168.39.183:{}]
	I1026 01:06:17.174737       1 main.go:300] handling current node
	I1026 01:06:17.174803       1 main.go:296] Handling node with IPs: map[192.168.39.62:{}]
	I1026 01:06:17.174825       1 main.go:323] Node ha-300623-m02 has CIDR [10.244.1.0/24] 
	I1026 01:06:17.175067       1 main.go:296] Handling node with IPs: map[192.168.39.180:{}]
	I1026 01:06:17.175100       1 main.go:323] Node ha-300623-m03 has CIDR [10.244.2.0/24] 
	I1026 01:06:17.175206       1 main.go:296] Handling node with IPs: map[192.168.39.197:{}]
	I1026 01:06:17.175228       1 main.go:323] Node ha-300623-m04 has CIDR [10.244.3.0/24] 
	I1026 01:06:27.175173       1 main.go:296] Handling node with IPs: map[192.168.39.183:{}]
	I1026 01:06:27.175288       1 main.go:300] handling current node
	I1026 01:06:27.175317       1 main.go:296] Handling node with IPs: map[192.168.39.62:{}]
	I1026 01:06:27.175335       1 main.go:323] Node ha-300623-m02 has CIDR [10.244.1.0/24] 
	I1026 01:06:27.175551       1 main.go:296] Handling node with IPs: map[192.168.39.180:{}]
	I1026 01:06:27.175580       1 main.go:323] Node ha-300623-m03 has CIDR [10.244.2.0/24] 
	I1026 01:06:27.175762       1 main.go:296] Handling node with IPs: map[192.168.39.197:{}]
	I1026 01:06:27.175795       1 main.go:323] Node ha-300623-m04 has CIDR [10.244.3.0/24] 
	I1026 01:06:37.177801       1 main.go:296] Handling node with IPs: map[192.168.39.183:{}]
	I1026 01:06:37.177885       1 main.go:300] handling current node
	I1026 01:06:37.177904       1 main.go:296] Handling node with IPs: map[192.168.39.62:{}]
	I1026 01:06:37.177911       1 main.go:323] Node ha-300623-m02 has CIDR [10.244.1.0/24] 
	I1026 01:06:37.178155       1 main.go:296] Handling node with IPs: map[192.168.39.180:{}]
	I1026 01:06:37.178179       1 main.go:323] Node ha-300623-m03 has CIDR [10.244.2.0/24] 
	I1026 01:06:37.178289       1 main.go:296] Handling node with IPs: map[192.168.39.197:{}]
	I1026 01:06:37.178308       1 main.go:323] Node ha-300623-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d] <==
	W1026 01:00:17.926981       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.183]
	I1026 01:00:17.928181       1 controller.go:615] quota admission added evaluator for: endpoints
	I1026 01:00:17.935826       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 01:00:17.947904       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1026 01:00:18.894624       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1026 01:00:18.916292       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 01:00:19.043184       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1026 01:00:23.502518       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1026 01:00:23.580105       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1026 01:03:00.396346       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48596: use of closed network connection
	E1026 01:03:00.597696       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48608: use of closed network connection
	E1026 01:03:00.779383       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48638: use of closed network connection
	E1026 01:03:00.968960       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48650: use of closed network connection
	E1026 01:03:01.159859       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48672: use of closed network connection
	E1026 01:03:01.356945       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48682: use of closed network connection
	E1026 01:03:01.529718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48700: use of closed network connection
	E1026 01:03:01.709409       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60606: use of closed network connection
	E1026 01:03:01.891333       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60636: use of closed network connection
	E1026 01:03:02.183836       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60668: use of closed network connection
	E1026 01:03:02.371592       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60688: use of closed network connection
	E1026 01:03:02.545427       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60698: use of closed network connection
	E1026 01:03:02.716320       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60708: use of closed network connection
	E1026 01:03:02.895527       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60734: use of closed network connection
	E1026 01:03:03.082972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60756: use of closed network connection
	W1026 01:04:27.938129       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.180 192.168.39.183]
	
	
	==> kube-controller-manager [47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3] <==
	I1026 01:03:33.037458       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:33.051536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:33.162489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	E1026 01:03:33.296244       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"ff6c8323-43e2-4224-a2c5-fbee23186204\", ResourceVersion:\"911\", Generation:1, CreationTimestamp:time.Date(2024, time.October, 26, 1, 0, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\\",
\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20241007-36f62932\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b16180), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\
", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002641908), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeCl
aimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002641920), EmptyDir:(*v1.EmptyDirVolumeSource)(n
il), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVo
lumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002641938), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), Azur
eFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20241007-36f62932\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001b161a0)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSou
rce)(0xc001b161e0)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false,
RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc002a7eba0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContai
ner(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002879af8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002835100), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Ove
rhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0029fa100)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002879b40)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1026 01:03:33.604085       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:35.173961       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:36.911095       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:36.978536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:37.761108       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-300623-m04"
	I1026 01:03:37.763013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:37.822795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:43.288569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:52.993775       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-300623-m04"
	I1026 01:03:52.994235       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:53.016162       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:55.127200       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:04:03.835355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:04:47.785209       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-300623-m04"
	I1026 01:04:47.785779       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m02"
	I1026 01:04:47.821461       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m02"
	I1026 01:04:47.859957       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.530512ms"
	I1026 01:04:47.860782       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="74.115µs"
	I1026 01:04:50.162222       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m02"
	I1026 01:04:52.952538       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m02"
	
	
	==> kube-proxy [f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1026 01:00:25.689413       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1026 01:00:25.723767       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.183"]
	E1026 01:00:25.723854       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 01:00:25.758166       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1026 01:00:25.758214       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 01:00:25.758247       1 server_linux.go:169] "Using iptables Proxier"
	I1026 01:00:25.760715       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 01:00:25.761068       1 server.go:483] "Version info" version="v1.31.2"
	I1026 01:00:25.761102       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 01:00:25.763718       1 config.go:199] "Starting service config controller"
	I1026 01:00:25.763757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1026 01:00:25.763790       1 config.go:105] "Starting endpoint slice config controller"
	I1026 01:00:25.763796       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1026 01:00:25.764426       1 config.go:328] "Starting node config controller"
	I1026 01:00:25.764461       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1026 01:00:25.864157       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1026 01:00:25.864237       1 shared_informer.go:320] Caches are synced for service config
	I1026 01:00:25.864661       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b] <==
	I1026 01:02:26.440503       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2v827" node="ha-300623-m03"
	E1026 01:02:55.345123       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qtdcl\": pod busybox-7dff88458-qtdcl is already assigned to node \"ha-300623-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-qtdcl" node="ha-300623-m02"
	E1026 01:02:55.345196       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1d2aa5b5-e44c-4423-a263-a19406face68(default/busybox-7dff88458-qtdcl) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-qtdcl"
	E1026 01:02:55.345218       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qtdcl\": pod busybox-7dff88458-qtdcl is already assigned to node \"ha-300623-m02\"" pod="default/busybox-7dff88458-qtdcl"
	I1026 01:02:55.345275       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-qtdcl" node="ha-300623-m02"
	E1026 01:02:55.394267       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x8rtl\": pod busybox-7dff88458-x8rtl is already assigned to node \"ha-300623\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-x8rtl" node="ha-300623"
	E1026 01:02:55.394343       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5(default/busybox-7dff88458-x8rtl) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-x8rtl"
	E1026 01:02:55.394364       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x8rtl\": pod busybox-7dff88458-x8rtl is already assigned to node \"ha-300623\"" pod="default/busybox-7dff88458-x8rtl"
	I1026 01:02:55.394386       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-x8rtl" node="ha-300623"
	E1026 01:02:55.394962       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-mbn94\": pod busybox-7dff88458-mbn94 is already assigned to node \"ha-300623-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-mbn94" node="ha-300623-m03"
	E1026 01:02:55.395010       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod dd5257f3-d0ba-4672-9836-da890e32fb0d(default/busybox-7dff88458-mbn94) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-mbn94"
	E1026 01:02:55.395023       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-mbn94\": pod busybox-7dff88458-mbn94 is already assigned to node \"ha-300623-m03\"" pod="default/busybox-7dff88458-mbn94"
	I1026 01:02:55.395037       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-mbn94" node="ha-300623-m03"
	E1026 01:03:33.099592       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4zk2k\": pod kube-proxy-4zk2k is already assigned to node \"ha-300623-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4zk2k" node="ha-300623-m04"
	E1026 01:03:33.101341       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8e40741c-73a0-41fa-b38f-a59fed42525b(kube-system/kube-proxy-4zk2k) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-4zk2k"
	E1026 01:03:33.101520       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4zk2k\": pod kube-proxy-4zk2k is already assigned to node \"ha-300623-m04\"" pod="kube-system/kube-proxy-4zk2k"
	I1026 01:03:33.101594       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4zk2k" node="ha-300623-m04"
	E1026 01:03:33.102404       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-l58kk\": pod kindnet-l58kk is already assigned to node \"ha-300623-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-l58kk" node="ha-300623-m04"
	E1026 01:03:33.109277       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 952ba5f9-93b1-4543-8b73-3ac1600315fc(kube-system/kindnet-l58kk) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-l58kk"
	E1026 01:03:33.109487       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-l58kk\": pod kindnet-l58kk is already assigned to node \"ha-300623-m04\"" pod="kube-system/kindnet-l58kk"
	I1026 01:03:33.109689       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-l58kk" node="ha-300623-m04"
	E1026 01:03:33.136820       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5lm6x\": pod kindnet-5lm6x is already assigned to node \"ha-300623-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5lm6x" node="ha-300623-m04"
	E1026 01:03:33.137312       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5lm6x\": pod kindnet-5lm6x is already assigned to node \"ha-300623-m04\"" pod="kube-system/kindnet-5lm6x"
	E1026 01:03:33.152104       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jhv9k\": pod kube-proxy-jhv9k is already assigned to node \"ha-300623-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jhv9k" node="ha-300623-m04"
	E1026 01:03:33.153545       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jhv9k\": pod kube-proxy-jhv9k is already assigned to node \"ha-300623-m04\"" pod="kube-system/kube-proxy-jhv9k"
	
	
	==> kubelet <==
	Oct 26 01:05:19 ha-300623 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 01:05:19 ha-300623 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 01:05:19 ha-300623 kubelet[1306]: E1026 01:05:19.171492    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904719170828944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:19 ha-300623 kubelet[1306]: E1026 01:05:19.171604    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904719170828944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:29 ha-300623 kubelet[1306]: E1026 01:05:29.173388    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904729173040296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:29 ha-300623 kubelet[1306]: E1026 01:05:29.173412    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904729173040296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:39 ha-300623 kubelet[1306]: E1026 01:05:39.176311    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904739175567800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:39 ha-300623 kubelet[1306]: E1026 01:05:39.176778    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904739175567800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:49 ha-300623 kubelet[1306]: E1026 01:05:49.179258    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904749178892500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:49 ha-300623 kubelet[1306]: E1026 01:05:49.179567    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904749178892500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:59 ha-300623 kubelet[1306]: E1026 01:05:59.181750    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904759181221897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:59 ha-300623 kubelet[1306]: E1026 01:05:59.181791    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904759181221897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:09 ha-300623 kubelet[1306]: E1026 01:06:09.183203    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904769182765460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:09 ha-300623 kubelet[1306]: E1026 01:06:09.183277    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904769182765460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:19 ha-300623 kubelet[1306]: E1026 01:06:19.106419    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 26 01:06:19 ha-300623 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 26 01:06:19 ha-300623 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 26 01:06:19 ha-300623 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 01:06:19 ha-300623 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 01:06:19 ha-300623 kubelet[1306]: E1026 01:06:19.185785    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904779185440641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:19 ha-300623 kubelet[1306]: E1026 01:06:19.185827    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904779185440641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:29 ha-300623 kubelet[1306]: E1026 01:06:29.188435    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904789187815376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:29 ha-300623 kubelet[1306]: E1026 01:06:29.188477    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904789187815376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:39 ha-300623 kubelet[1306]: E1026 01:06:39.190241    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904799189890933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:39 ha-300623 kubelet[1306]: E1026 01:06:39.190296    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904799189890933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
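The repeated "Eviction manager: failed to get HasDedicatedImageFs" entries in the kubelet log above come from the kubelet's ImageFsInfo CRI call against cri-o. A minimal way to query the same image-filesystem stats directly on the node, assuming crictl is available in the guest (it normally ships with the minikube ISO), would be:

    out/minikube-linux-amd64 ssh -p ha-300623 -- sudo crictl imagefsinfo

The mountpoint reported there should correspond to the /var/lib/containers/storage/overlay-images path shown in the kubelet errors.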
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-300623 -n ha-300623
helpers_test.go:261: (dbg) Run:  kubectl --context ha-300623 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.51s)

                                                
                                    

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.075418153s)
ha_test.go:309: expected profile "ha-300623" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-300623\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-300623\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-300623\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.183\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.62\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.180\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.197\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\
"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-300623 -n ha-300623
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-300623 logs -n 25: (1.342414761s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623:/home/docker/cp-test_ha-300623-m03_ha-300623.txt                       |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623 sudo cat                                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m03_ha-300623.txt                                 |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m02:/home/docker/cp-test_ha-300623-m03_ha-300623-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m02 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m03_ha-300623-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04:/home/docker/cp-test_ha-300623-m03_ha-300623-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m04 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m03_ha-300623-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp testdata/cp-test.txt                                                | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2355760230/001/cp-test_ha-300623-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623:/home/docker/cp-test_ha-300623-m04_ha-300623.txt                       |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623 sudo cat                                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623.txt                                 |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m02:/home/docker/cp-test_ha-300623-m04_ha-300623-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m02 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03:/home/docker/cp-test_ha-300623-m04_ha-300623-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m03 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-300623 node stop m02 -v=7                                                     | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-300623 node start m02 -v=7                                                    | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 00:59:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 00:59:41.102327   27934 out.go:345] Setting OutFile to fd 1 ...
	I1026 00:59:41.102422   27934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:59:41.102427   27934 out.go:358] Setting ErrFile to fd 2...
	I1026 00:59:41.102431   27934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:59:41.102629   27934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 00:59:41.103175   27934 out.go:352] Setting JSON to false
	I1026 00:59:41.103986   27934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2521,"bootTime":1729901860,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 00:59:41.104085   27934 start.go:139] virtualization: kvm guest
	I1026 00:59:41.106060   27934 out.go:177] * [ha-300623] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 00:59:41.107343   27934 notify.go:220] Checking for updates...
	I1026 00:59:41.107361   27934 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 00:59:41.108566   27934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:59:41.109853   27934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 00:59:41.111166   27934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:59:41.112531   27934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 00:59:41.113798   27934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 00:59:41.115167   27934 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 00:59:41.148833   27934 out.go:177] * Using the kvm2 driver based on user configuration
	I1026 00:59:41.150115   27934 start.go:297] selected driver: kvm2
	I1026 00:59:41.150128   27934 start.go:901] validating driver "kvm2" against <nil>
	I1026 00:59:41.150139   27934 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 00:59:41.150812   27934 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:59:41.150910   27934 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 00:59:41.165692   27934 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 00:59:41.165750   27934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 00:59:41.166043   27934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 00:59:41.166082   27934 cni.go:84] Creating CNI manager for ""
	I1026 00:59:41.166138   27934 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1026 00:59:41.166151   27934 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 00:59:41.166210   27934 start.go:340] cluster config:
	{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1026 00:59:41.166340   27934 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:59:41.168250   27934 out.go:177] * Starting "ha-300623" primary control-plane node in "ha-300623" cluster
	I1026 00:59:41.169625   27934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 00:59:41.169671   27934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 00:59:41.169699   27934 cache.go:56] Caching tarball of preloaded images
	I1026 00:59:41.169771   27934 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 00:59:41.169781   27934 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 00:59:41.170066   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 00:59:41.170083   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json: {Name:mkc18d341848fb714503df8b4bfc42be69331fb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 00:59:41.170205   27934 start.go:360] acquireMachinesLock for ha-300623: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 00:59:41.170231   27934 start.go:364] duration metric: took 14.614µs to acquireMachinesLock for "ha-300623"
	I1026 00:59:41.170247   27934 start.go:93] Provisioning new machine with config: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 00:59:41.170298   27934 start.go:125] createHost starting for "" (driver="kvm2")
	I1026 00:59:41.171896   27934 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1026 00:59:41.172034   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:59:41.172078   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:59:41.186522   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39131
	I1026 00:59:41.186988   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:59:41.187517   27934 main.go:141] libmachine: Using API Version  1
	I1026 00:59:41.187539   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:59:41.187925   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:59:41.188146   27934 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 00:59:41.188284   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 00:59:41.188436   27934 start.go:159] libmachine.API.Create for "ha-300623" (driver="kvm2")
	I1026 00:59:41.188472   27934 client.go:168] LocalClient.Create starting
	I1026 00:59:41.188506   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 00:59:41.188539   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 00:59:41.188554   27934 main.go:141] libmachine: Parsing certificate...
	I1026 00:59:41.188604   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 00:59:41.188622   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 00:59:41.188635   27934 main.go:141] libmachine: Parsing certificate...
	I1026 00:59:41.188652   27934 main.go:141] libmachine: Running pre-create checks...
	I1026 00:59:41.188664   27934 main.go:141] libmachine: (ha-300623) Calling .PreCreateCheck
	I1026 00:59:41.189023   27934 main.go:141] libmachine: (ha-300623) Calling .GetConfigRaw
	I1026 00:59:41.189374   27934 main.go:141] libmachine: Creating machine...
	I1026 00:59:41.189386   27934 main.go:141] libmachine: (ha-300623) Calling .Create
	I1026 00:59:41.189526   27934 main.go:141] libmachine: (ha-300623) Creating KVM machine...
	I1026 00:59:41.190651   27934 main.go:141] libmachine: (ha-300623) DBG | found existing default KVM network
	I1026 00:59:41.191301   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.191170   27957 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I1026 00:59:41.191329   27934 main.go:141] libmachine: (ha-300623) DBG | created network xml: 
	I1026 00:59:41.191339   27934 main.go:141] libmachine: (ha-300623) DBG | <network>
	I1026 00:59:41.191366   27934 main.go:141] libmachine: (ha-300623) DBG |   <name>mk-ha-300623</name>
	I1026 00:59:41.191399   27934 main.go:141] libmachine: (ha-300623) DBG |   <dns enable='no'/>
	I1026 00:59:41.191415   27934 main.go:141] libmachine: (ha-300623) DBG |   
	I1026 00:59:41.191424   27934 main.go:141] libmachine: (ha-300623) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1026 00:59:41.191431   27934 main.go:141] libmachine: (ha-300623) DBG |     <dhcp>
	I1026 00:59:41.191438   27934 main.go:141] libmachine: (ha-300623) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1026 00:59:41.191445   27934 main.go:141] libmachine: (ha-300623) DBG |     </dhcp>
	I1026 00:59:41.191450   27934 main.go:141] libmachine: (ha-300623) DBG |   </ip>
	I1026 00:59:41.191457   27934 main.go:141] libmachine: (ha-300623) DBG |   
	I1026 00:59:41.191462   27934 main.go:141] libmachine: (ha-300623) DBG | </network>
	I1026 00:59:41.191489   27934 main.go:141] libmachine: (ha-300623) DBG | 
	I1026 00:59:41.196331   27934 main.go:141] libmachine: (ha-300623) DBG | trying to create private KVM network mk-ha-300623 192.168.39.0/24...
	I1026 00:59:41.258139   27934 main.go:141] libmachine: (ha-300623) DBG | private KVM network mk-ha-300623 192.168.39.0/24 created
	I1026 00:59:41.258172   27934 main.go:141] libmachine: (ha-300623) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623 ...
	I1026 00:59:41.258186   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.258104   27957 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:59:41.258203   27934 main.go:141] libmachine: (ha-300623) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 00:59:41.258226   27934 main.go:141] libmachine: (ha-300623) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 00:59:41.511971   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.511837   27957 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa...
	I1026 00:59:41.679961   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.679835   27957 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/ha-300623.rawdisk...
	I1026 00:59:41.680008   27934 main.go:141] libmachine: (ha-300623) DBG | Writing magic tar header
	I1026 00:59:41.680023   27934 main.go:141] libmachine: (ha-300623) DBG | Writing SSH key tar header
	I1026 00:59:41.680037   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:41.679951   27957 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623 ...
	I1026 00:59:41.680109   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623
	I1026 00:59:41.680139   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 00:59:41.680156   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623 (perms=drwx------)
	I1026 00:59:41.680166   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:59:41.680185   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 00:59:41.680194   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 00:59:41.680209   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home/jenkins
	I1026 00:59:41.680219   27934 main.go:141] libmachine: (ha-300623) DBG | Checking permissions on dir: /home
	I1026 00:59:41.680230   27934 main.go:141] libmachine: (ha-300623) DBG | Skipping /home - not owner
	I1026 00:59:41.680244   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 00:59:41.680257   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 00:59:41.680313   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 00:59:41.680344   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 00:59:41.680359   27934 main.go:141] libmachine: (ha-300623) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 00:59:41.680367   27934 main.go:141] libmachine: (ha-300623) Creating domain...
	I1026 00:59:41.681340   27934 main.go:141] libmachine: (ha-300623) define libvirt domain using xml: 
	I1026 00:59:41.681362   27934 main.go:141] libmachine: (ha-300623) <domain type='kvm'>
	I1026 00:59:41.681370   27934 main.go:141] libmachine: (ha-300623)   <name>ha-300623</name>
	I1026 00:59:41.681381   27934 main.go:141] libmachine: (ha-300623)   <memory unit='MiB'>2200</memory>
	I1026 00:59:41.681403   27934 main.go:141] libmachine: (ha-300623)   <vcpu>2</vcpu>
	I1026 00:59:41.681438   27934 main.go:141] libmachine: (ha-300623)   <features>
	I1026 00:59:41.681448   27934 main.go:141] libmachine: (ha-300623)     <acpi/>
	I1026 00:59:41.681452   27934 main.go:141] libmachine: (ha-300623)     <apic/>
	I1026 00:59:41.681457   27934 main.go:141] libmachine: (ha-300623)     <pae/>
	I1026 00:59:41.681471   27934 main.go:141] libmachine: (ha-300623)     
	I1026 00:59:41.681479   27934 main.go:141] libmachine: (ha-300623)   </features>
	I1026 00:59:41.681484   27934 main.go:141] libmachine: (ha-300623)   <cpu mode='host-passthrough'>
	I1026 00:59:41.681489   27934 main.go:141] libmachine: (ha-300623)   
	I1026 00:59:41.681494   27934 main.go:141] libmachine: (ha-300623)   </cpu>
	I1026 00:59:41.681500   27934 main.go:141] libmachine: (ha-300623)   <os>
	I1026 00:59:41.681504   27934 main.go:141] libmachine: (ha-300623)     <type>hvm</type>
	I1026 00:59:41.681512   27934 main.go:141] libmachine: (ha-300623)     <boot dev='cdrom'/>
	I1026 00:59:41.681520   27934 main.go:141] libmachine: (ha-300623)     <boot dev='hd'/>
	I1026 00:59:41.681528   27934 main.go:141] libmachine: (ha-300623)     <bootmenu enable='no'/>
	I1026 00:59:41.681532   27934 main.go:141] libmachine: (ha-300623)   </os>
	I1026 00:59:41.681539   27934 main.go:141] libmachine: (ha-300623)   <devices>
	I1026 00:59:41.681544   27934 main.go:141] libmachine: (ha-300623)     <disk type='file' device='cdrom'>
	I1026 00:59:41.681575   27934 main.go:141] libmachine: (ha-300623)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/boot2docker.iso'/>
	I1026 00:59:41.681594   27934 main.go:141] libmachine: (ha-300623)       <target dev='hdc' bus='scsi'/>
	I1026 00:59:41.681606   27934 main.go:141] libmachine: (ha-300623)       <readonly/>
	I1026 00:59:41.681615   27934 main.go:141] libmachine: (ha-300623)     </disk>
	I1026 00:59:41.681625   27934 main.go:141] libmachine: (ha-300623)     <disk type='file' device='disk'>
	I1026 00:59:41.681635   27934 main.go:141] libmachine: (ha-300623)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 00:59:41.681651   27934 main.go:141] libmachine: (ha-300623)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/ha-300623.rawdisk'/>
	I1026 00:59:41.681664   27934 main.go:141] libmachine: (ha-300623)       <target dev='hda' bus='virtio'/>
	I1026 00:59:41.681675   27934 main.go:141] libmachine: (ha-300623)     </disk>
	I1026 00:59:41.681686   27934 main.go:141] libmachine: (ha-300623)     <interface type='network'>
	I1026 00:59:41.681698   27934 main.go:141] libmachine: (ha-300623)       <source network='mk-ha-300623'/>
	I1026 00:59:41.681709   27934 main.go:141] libmachine: (ha-300623)       <model type='virtio'/>
	I1026 00:59:41.681719   27934 main.go:141] libmachine: (ha-300623)     </interface>
	I1026 00:59:41.681734   27934 main.go:141] libmachine: (ha-300623)     <interface type='network'>
	I1026 00:59:41.681746   27934 main.go:141] libmachine: (ha-300623)       <source network='default'/>
	I1026 00:59:41.681756   27934 main.go:141] libmachine: (ha-300623)       <model type='virtio'/>
	I1026 00:59:41.681773   27934 main.go:141] libmachine: (ha-300623)     </interface>
	I1026 00:59:41.681784   27934 main.go:141] libmachine: (ha-300623)     <serial type='pty'>
	I1026 00:59:41.681794   27934 main.go:141] libmachine: (ha-300623)       <target port='0'/>
	I1026 00:59:41.681803   27934 main.go:141] libmachine: (ha-300623)     </serial>
	I1026 00:59:41.681813   27934 main.go:141] libmachine: (ha-300623)     <console type='pty'>
	I1026 00:59:41.681823   27934 main.go:141] libmachine: (ha-300623)       <target type='serial' port='0'/>
	I1026 00:59:41.681835   27934 main.go:141] libmachine: (ha-300623)     </console>
	I1026 00:59:41.681847   27934 main.go:141] libmachine: (ha-300623)     <rng model='virtio'>
	I1026 00:59:41.681861   27934 main.go:141] libmachine: (ha-300623)       <backend model='random'>/dev/random</backend>
	I1026 00:59:41.681876   27934 main.go:141] libmachine: (ha-300623)     </rng>
	I1026 00:59:41.681884   27934 main.go:141] libmachine: (ha-300623)     
	I1026 00:59:41.681893   27934 main.go:141] libmachine: (ha-300623)     
	I1026 00:59:41.681902   27934 main.go:141] libmachine: (ha-300623)   </devices>
	I1026 00:59:41.681910   27934 main.go:141] libmachine: (ha-300623) </domain>
	I1026 00:59:41.681919   27934 main.go:141] libmachine: (ha-300623) 
	I1026 00:59:41.685794   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:bc:3c:c8 in network default
	I1026 00:59:41.686289   27934 main.go:141] libmachine: (ha-300623) Ensuring networks are active...
	I1026 00:59:41.686312   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:41.686908   27934 main.go:141] libmachine: (ha-300623) Ensuring network default is active
	I1026 00:59:41.687318   27934 main.go:141] libmachine: (ha-300623) Ensuring network mk-ha-300623 is active
	I1026 00:59:41.687714   27934 main.go:141] libmachine: (ha-300623) Getting domain xml...
	I1026 00:59:41.688278   27934 main.go:141] libmachine: (ha-300623) Creating domain...
	I1026 00:59:42.865174   27934 main.go:141] libmachine: (ha-300623) Waiting to get IP...
	I1026 00:59:42.866030   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:42.866436   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:42.866478   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:42.866424   27957 retry.go:31] will retry after 310.395452ms: waiting for machine to come up
	I1026 00:59:43.178911   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:43.179377   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:43.179517   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:43.179326   27957 retry.go:31] will retry after 258.757335ms: waiting for machine to come up
	I1026 00:59:43.439460   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:43.439855   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:43.439883   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:43.439810   27957 retry.go:31] will retry after 476.137443ms: waiting for machine to come up
	I1026 00:59:43.917472   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:43.917875   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:43.917910   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:43.917853   27957 retry.go:31] will retry after 411.866237ms: waiting for machine to come up
	I1026 00:59:44.331261   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:44.331762   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:44.331800   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:44.331724   27957 retry.go:31] will retry after 639.236783ms: waiting for machine to come up
	I1026 00:59:44.972039   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:44.972415   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:44.972443   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:44.972363   27957 retry.go:31] will retry after 943.318782ms: waiting for machine to come up
	I1026 00:59:45.917370   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:45.917808   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:45.917870   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:45.917775   27957 retry.go:31] will retry after 1.007000764s: waiting for machine to come up
	I1026 00:59:46.926545   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:46.926930   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:46.926955   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:46.926890   27957 retry.go:31] will retry after 905.175073ms: waiting for machine to come up
	I1026 00:59:47.834112   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:47.834468   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:47.834505   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:47.834452   27957 retry.go:31] will retry after 1.696390131s: waiting for machine to come up
	I1026 00:59:49.533204   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:49.533596   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:49.533625   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:49.533577   27957 retry.go:31] will retry after 2.087564363s: waiting for machine to come up
	I1026 00:59:51.622505   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:51.622952   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:51.623131   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:51.622900   27957 retry.go:31] will retry after 2.813881441s: waiting for machine to come up
	I1026 00:59:54.439730   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:54.440081   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:54.440111   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:54.440045   27957 retry.go:31] will retry after 2.560428672s: waiting for machine to come up
	I1026 00:59:57.002066   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 00:59:57.002394   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find current IP address of domain ha-300623 in network mk-ha-300623
	I1026 00:59:57.002424   27934 main.go:141] libmachine: (ha-300623) DBG | I1026 00:59:57.002352   27957 retry.go:31] will retry after 3.377744145s: waiting for machine to come up
	I1026 01:00:00.384015   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.384460   27934 main.go:141] libmachine: (ha-300623) Found IP for machine: 192.168.39.183
	I1026 01:00:00.384479   27934 main.go:141] libmachine: (ha-300623) Reserving static IP address...
	I1026 01:00:00.384505   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has current primary IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.384856   27934 main.go:141] libmachine: (ha-300623) DBG | unable to find host DHCP lease matching {name: "ha-300623", mac: "52:54:00:4d:a0:46", ip: "192.168.39.183"} in network mk-ha-300623
	I1026 01:00:00.455221   27934 main.go:141] libmachine: (ha-300623) DBG | Getting to WaitForSSH function...
	I1026 01:00:00.455245   27934 main.go:141] libmachine: (ha-300623) Reserved static IP address: 192.168.39.183
	I1026 01:00:00.455253   27934 main.go:141] libmachine: (ha-300623) Waiting for SSH to be available...
	I1026 01:00:00.457760   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.458200   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.458223   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.458402   27934 main.go:141] libmachine: (ha-300623) DBG | Using SSH client type: external
	I1026 01:00:00.458428   27934 main.go:141] libmachine: (ha-300623) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa (-rw-------)
	I1026 01:00:00.458460   27934 main.go:141] libmachine: (ha-300623) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 01:00:00.458475   27934 main.go:141] libmachine: (ha-300623) DBG | About to run SSH command:
	I1026 01:00:00.458487   27934 main.go:141] libmachine: (ha-300623) DBG | exit 0
	I1026 01:00:00.585473   27934 main.go:141] libmachine: (ha-300623) DBG | SSH cmd err, output: <nil>: 
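
The retry loop above waits for the guest to pick up an IP, then probes SSH readiness by running exit 0 through an external ssh client. A minimal Go sketch of such a readiness poll follows; it is not minikube's libmachine implementation, and the host, key path, and retry timings are placeholder assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH polls the guest by running "exit 0" over an external ssh client,
// the same readiness probe visible in the log, until it succeeds or the
// deadline expires. Host, key path and timings are illustrative placeholders.
func waitForSSH(host, keyPath string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		cmd := exec.Command("ssh",
			"-i", keyPath,
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"docker@"+host, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // guest answered: SSH is available
		}
		time.Sleep(2 * time.Second) // back off before retrying
	}
	return fmt.Errorf("ssh to %s not ready within %s", host, deadline)
}

func main() {
	if err := waitForSSH("192.168.39.183", "/path/to/id_rsa", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}
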
	I1026 01:00:00.585717   27934 main.go:141] libmachine: (ha-300623) KVM machine creation complete!
	I1026 01:00:00.586041   27934 main.go:141] libmachine: (ha-300623) Calling .GetConfigRaw
	I1026 01:00:00.586564   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:00.586735   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:00.586856   27934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 01:00:00.586870   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:00.588144   27934 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 01:00:00.588156   27934 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 01:00:00.588161   27934 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 01:00:00.588166   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:00.590434   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.590800   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.590815   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.590958   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:00.591118   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.591291   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.591416   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:00.591579   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:00.591799   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:00.591812   27934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 01:00:00.700544   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:00:00.700568   27934 main.go:141] libmachine: Detecting the provisioner...
	I1026 01:00:00.700586   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:00.703305   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.703686   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.703708   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.703827   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:00.704016   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.704163   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.704286   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:00.704450   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:00.704607   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:00.704617   27934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 01:00:00.813937   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 01:00:00.814027   27934 main.go:141] libmachine: found compatible host: buildroot
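
As the log shows, provisioner detection is just cat /etc/os-release parsed for a recognized ID (Buildroot here). A rough sketch of that match, assuming the file contents have already been fetched over SSH:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner picks a provisioner from /etc/os-release contents.
// Only the Buildroot case seen in the log is modelled; anything else errors.
func detectProvisioner(osRelease string) (string, error) {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			id := strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			if id == "buildroot" {
				return "buildroot", nil
			}
			return "", fmt.Errorf("no compatible provisioner for ID=%q", id)
		}
	}
	return "", fmt.Errorf("ID field not found in os-release")
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9\nID=buildroot\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fmt.Println(detectProvisioner(sample))
}
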
	I1026 01:00:00.814042   27934 main.go:141] libmachine: Provisioning with buildroot...
	I1026 01:00:00.814078   27934 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:00:00.814305   27934 buildroot.go:166] provisioning hostname "ha-300623"
	I1026 01:00:00.814333   27934 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:00:00.814495   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:00.817076   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.817394   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.817438   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.817578   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:00.817764   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.817892   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.818015   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:00.818165   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:00.818334   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:00.818344   27934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-300623 && echo "ha-300623" | sudo tee /etc/hostname
	I1026 01:00:00.943069   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-300623
	
	I1026 01:00:00.943097   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:00.946005   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.946325   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:00.946354   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:00.946524   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:00.946840   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.947004   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:00.947144   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:00.947328   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:00.947549   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:00.947572   27934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-300623' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-300623/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-300623' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:00:01.065899   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:00:01.065958   27934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:00:01.066012   27934 buildroot.go:174] setting up certificates
	I1026 01:00:01.066027   27934 provision.go:84] configureAuth start
	I1026 01:00:01.066042   27934 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:00:01.066285   27934 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:00:01.069069   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.069397   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.069440   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.069574   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.071665   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.072025   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.072053   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.072211   27934 provision.go:143] copyHostCerts
	I1026 01:00:01.072292   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:00:01.072346   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:00:01.072359   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:00:01.072430   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:00:01.072514   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:00:01.072533   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:00:01.072540   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:00:01.072577   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:00:01.072670   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:00:01.072703   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:00:01.072711   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:00:01.072743   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:00:01.072808   27934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.ha-300623 san=[127.0.0.1 192.168.39.183 ha-300623 localhost minikube]
	I1026 01:00:01.133729   27934 provision.go:177] copyRemoteCerts
	I1026 01:00:01.133783   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:00:01.133804   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.136311   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.136591   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.136617   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.136770   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.136937   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.137059   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.137192   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:01.222921   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:00:01.222983   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:00:01.245372   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:00:01.245444   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1026 01:00:01.267891   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:00:01.267957   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 01:00:01.289667   27934 provision.go:87] duration metric: took 223.628307ms to configureAuth
	I1026 01:00:01.289699   27934 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:00:01.289880   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:01.289953   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.292672   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.292982   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.293012   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.293184   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.293375   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.293624   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.293732   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.293904   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:01.294111   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:01.294137   27934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:00:01.522070   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:00:01.522096   27934 main.go:141] libmachine: Checking connection to Docker...
	I1026 01:00:01.522103   27934 main.go:141] libmachine: (ha-300623) Calling .GetURL
	I1026 01:00:01.523378   27934 main.go:141] libmachine: (ha-300623) DBG | Using libvirt version 6000000
	I1026 01:00:01.525286   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.525641   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.525670   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.525803   27934 main.go:141] libmachine: Docker is up and running!
	I1026 01:00:01.525822   27934 main.go:141] libmachine: Reticulating splines...
	I1026 01:00:01.525829   27934 client.go:171] duration metric: took 20.337349207s to LocalClient.Create
	I1026 01:00:01.525853   27934 start.go:167] duration metric: took 20.337416513s to libmachine.API.Create "ha-300623"
	I1026 01:00:01.525867   27934 start.go:293] postStartSetup for "ha-300623" (driver="kvm2")
	I1026 01:00:01.525878   27934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:00:01.525899   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.526150   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:00:01.526178   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.528275   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.528583   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.528614   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.528742   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.528907   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.529035   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.529169   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:01.615528   27934 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:00:01.619526   27934 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:00:01.619547   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:00:01.619607   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:00:01.619676   27934 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:00:01.619685   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /etc/ssl/certs/176152.pem
	I1026 01:00:01.619772   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:00:01.628818   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:00:01.651055   27934 start.go:296] duration metric: took 125.175871ms for postStartSetup
	I1026 01:00:01.651106   27934 main.go:141] libmachine: (ha-300623) Calling .GetConfigRaw
	I1026 01:00:01.651707   27934 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:00:01.654048   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.654337   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.654358   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.654637   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:00:01.654812   27934 start.go:128] duration metric: took 20.484504528s to createHost
	I1026 01:00:01.654833   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.656877   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.657252   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.657277   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.657399   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.657609   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.657759   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.657866   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.657999   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:01.658194   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:00:01.658205   27934 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:00:01.770028   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729904401.731044736
	
	I1026 01:00:01.770051   27934 fix.go:216] guest clock: 1729904401.731044736
	I1026 01:00:01.770074   27934 fix.go:229] Guest: 2024-10-26 01:00:01.731044736 +0000 UTC Remote: 2024-10-26 01:00:01.654822884 +0000 UTC m=+20.590184391 (delta=76.221852ms)
	I1026 01:00:01.770101   27934 fix.go:200] guest clock delta is within tolerance: 76.221852ms
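
The clock check above runs date +%s.%N in the guest and compares the result with the host clock, accepting the skew if it is within a tolerance. A tiny sketch of that comparison; the one-second tolerance used here is an illustrative assumption, not minikube's configured value.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDeltaOK parses the guest's "date +%s.%N" output and reports the
// host/guest skew plus whether its absolute value stays within tol.
func clockDeltaOK(guestOut string, hostNow time.Time, tol time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := hostNow.Sub(guest)
	return delta, math.Abs(float64(delta)) <= float64(tol), nil
}

func main() {
	// Values taken from the log lines above; the 1s tolerance is an assumption.
	host := time.Date(2024, 10, 26, 1, 0, 1, 654822884, time.UTC)
	d, ok, err := clockDeltaOK("1729904401.731044736", host, time.Second)
	fmt.Println(d, ok, err)
}
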
	I1026 01:00:01.770108   27934 start.go:83] releasing machines lock for "ha-300623", held for 20.599868049s
	I1026 01:00:01.770184   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.770452   27934 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:00:01.772669   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.773035   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.773066   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.773320   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.773757   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.773942   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:01.774055   27934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:00:01.774095   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.774157   27934 ssh_runner.go:195] Run: cat /version.json
	I1026 01:00:01.774180   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:01.776503   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.776822   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.776846   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.776862   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.777013   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.777160   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.777266   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:01.777287   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:01.777291   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.777476   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:01.777463   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:01.777588   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:01.777703   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:01.777819   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:01.889672   27934 ssh_runner.go:195] Run: systemctl --version
	I1026 01:00:01.895441   27934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:00:02.062750   27934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:00:02.068559   27934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:00:02.068640   27934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:00:02.085755   27934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 01:00:02.085784   27934 start.go:495] detecting cgroup driver to use...
	I1026 01:00:02.085879   27934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:00:02.103715   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:00:02.116629   27934 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:00:02.116698   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:00:02.129921   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:00:02.143297   27934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:00:02.262539   27934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:00:02.410776   27934 docker.go:233] disabling docker service ...
	I1026 01:00:02.410852   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:00:02.425252   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:00:02.438874   27934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:00:02.567343   27934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:00:02.692382   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:00:02.705780   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:00:02.723128   27934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:00:02.723196   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.733126   27934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:00:02.733204   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.743104   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.752720   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.762245   27934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:00:02.772039   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.781522   27934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.797499   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:02.807723   27934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:00:02.816764   27934 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 01:00:02.816838   27934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 01:00:02.830364   27934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:00:02.840309   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:00:02.959488   27934 ssh_runner.go:195] Run: sudo systemctl restart crio
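
The runtime switch above rewrites keys in /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroup manager) and then restarts CRI-O. A hedged sketch of how such an override command can be assembled, reusing the sed pattern visible in the log:

package main

import "fmt"

// crioSetKey returns a shell command that rewrites `key = ...` in the CRI-O
// drop-in config, matching the sed pattern shown in the log above.
func crioSetKey(key, value string) string {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	return fmt.Sprintf(`sudo sed -i 's|^.*%s = .*$|%s = "%s"|' %s`, key, key, value, conf)
}

func main() {
	fmt.Println(crioSetKey("pause_image", "registry.k8s.io/pause:3.10"))
	fmt.Println(crioSetKey("cgroup_manager", "cgroupfs"))
	// The generated commands would then be run over SSH, followed by
	// `sudo systemctl daemon-reload && sudo systemctl restart crio`.
}
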
	I1026 01:00:03.048870   27934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:00:03.048952   27934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:00:03.053750   27934 start.go:563] Will wait 60s for crictl version
	I1026 01:00:03.053801   27934 ssh_runner.go:195] Run: which crictl
	I1026 01:00:03.057147   27934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:00:03.096489   27934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:00:03.096564   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:00:03.124313   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:00:03.153078   27934 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:00:03.154469   27934 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:00:03.157053   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:03.157290   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:03.157320   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:03.157571   27934 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:00:03.161502   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:00:03.173922   27934 kubeadm.go:883] updating cluster {Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 01:00:03.174024   27934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:00:03.174067   27934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:00:03.205502   27934 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1026 01:00:03.205563   27934 ssh_runner.go:195] Run: which lz4
	I1026 01:00:03.209242   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1026 01:00:03.209334   27934 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 01:00:03.213268   27934 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 01:00:03.213294   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1026 01:00:04.450368   27934 crio.go:462] duration metric: took 1.241064009s to copy over tarball
	I1026 01:00:04.450448   27934 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 01:00:06.473538   27934 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.023056026s)
	I1026 01:00:06.473572   27934 crio.go:469] duration metric: took 2.023171959s to extract the tarball
	I1026 01:00:06.473605   27934 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 01:00:06.509382   27934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:00:06.550351   27934 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 01:00:06.550371   27934 cache_images.go:84] Images are preloaded, skipping loading
	I1026 01:00:06.550379   27934 kubeadm.go:934] updating node { 192.168.39.183 8443 v1.31.2 crio true true} ...
	I1026 01:00:06.550479   27934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-300623 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 01:00:06.550540   27934 ssh_runner.go:195] Run: crio config
	I1026 01:00:06.601899   27934 cni.go:84] Creating CNI manager for ""
	I1026 01:00:06.601920   27934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1026 01:00:06.601928   27934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 01:00:06.601953   27934 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-300623 NodeName:ha-300623 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 01:00:06.602065   27934 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-300623"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.183"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 01:00:06.602090   27934 kube-vip.go:115] generating kube-vip config ...
	I1026 01:00:06.602134   27934 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1026 01:00:06.618905   27934 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1026 01:00:06.619004   27934 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
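
The lb_enable/lb_port entries in the manifest above come from the earlier modprobe probe: if the IPVS modules load, control-plane load balancing is switched on in the generated kube-vip config. A sketch of that decision, with the environment map reduced to a few illustrative entries:

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable mirrors the check in the log: if the IPVS modules load, the
// generated kube-vip manifest gets lb_enable/lb_port set so the VIP also
// load-balances API traffic across control-plane nodes.
func ipvsAvailable() bool {
	cmd := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")
	return cmd.Run() == nil
}

func main() {
	env := map[string]string{
		"vip_arp": "true",
		"port":    "8443",
		"address": "192.168.39.254",
	}
	if ipvsAvailable() {
		env["lb_enable"] = "true"
		env["lb_port"] = "8443"
	}
	fmt.Println(env)
}
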
	I1026 01:00:06.619054   27934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:00:06.628422   27934 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 01:00:06.628482   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1026 01:00:06.637507   27934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1026 01:00:06.653506   27934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:00:06.669385   27934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1026 01:00:06.685316   27934 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1026 01:00:06.701298   27934 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1026 01:00:06.704780   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:00:06.716358   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:00:06.835294   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:00:06.851617   27934 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623 for IP: 192.168.39.183
	I1026 01:00:06.851643   27934 certs.go:194] generating shared ca certs ...
	I1026 01:00:06.851663   27934 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:06.851825   27934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:00:06.851928   27934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:00:06.851951   27934 certs.go:256] generating profile certs ...
	I1026 01:00:06.852032   27934 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key
	I1026 01:00:06.852053   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt with IP's: []
	I1026 01:00:07.025844   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt ...
	I1026 01:00:07.025878   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt: {Name:mk0969781384c8eb24d904330417d9f7d1f6988a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.026073   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key ...
	I1026 01:00:07.026087   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key: {Name:mkbd66f66cfdc11b06ed7ee27efeab2c35691371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.026190   27934 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.30b82e6a
	I1026 01:00:07.026206   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.30b82e6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.254]
	I1026 01:00:07.091648   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.30b82e6a ...
	I1026 01:00:07.091676   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.30b82e6a: {Name:mk79ee9c8c68f427992ae46daac972e5a80d39e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.091862   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.30b82e6a ...
	I1026 01:00:07.091878   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.30b82e6a: {Name:mk0161ea9da0d9d1941870c52b97be187bff2c45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.091976   27934 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.30b82e6a -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt
	I1026 01:00:07.092075   27934 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.30b82e6a -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key
	I1026 01:00:07.092130   27934 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key
	I1026 01:00:07.092145   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt with IP's: []
	I1026 01:00:07.288723   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt ...
	I1026 01:00:07.288754   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt: {Name:mka585c80540dcf4447ce80873c4b4204a6ac833 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:07.288941   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key ...
	I1026 01:00:07.288955   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key: {Name:mk2a46d0d0037729eebdc4ee5998eb5ddbae3abb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
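
The profile certificates generated above are signed certs whose IP SANs cover the service VIP, localhost, the node IP, and the HA VIP. A self-contained sketch of issuing such a cert with Go's crypto/x509; it is self-signed here for brevity, whereas the real flow signs with the minikubeCA key.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServingCert issues a certificate whose IP SANs cover the apiserver
// addresses seen in the log. Self-signed for brevity; the real flow signs
// with the minikubeCA certificate and key.
func newServingCert(ips []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, ip := range ips {
		tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(ip))
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	pemBytes, err := newServingCert([]string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "192.168.39.183", "192.168.39.254"})
	fmt.Println(len(pemBytes), err)
}
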
	I1026 01:00:07.289048   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:00:07.289071   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:00:07.289091   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:00:07.289110   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:00:07.289128   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:00:07.289145   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:00:07.289157   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:00:07.289174   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:00:07.289238   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:00:07.289301   27934 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:00:07.289321   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:00:07.289357   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:00:07.289389   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:00:07.289437   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:00:07.289497   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:00:07.289533   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /usr/share/ca-certificates/176152.pem
	I1026 01:00:07.289554   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:07.289572   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem -> /usr/share/ca-certificates/17615.pem
	I1026 01:00:07.290185   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:00:07.315249   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:00:07.338589   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:00:07.361991   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:00:07.385798   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 01:00:07.409069   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 01:00:07.431845   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:00:07.454880   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:00:07.477392   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:00:07.500857   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:00:07.523684   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:00:07.546154   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 01:00:07.562082   27934 ssh_runner.go:195] Run: openssl version
	I1026 01:00:07.567710   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:00:07.578511   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:00:07.582871   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:00:07.582924   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:00:07.588401   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:00:07.601567   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:00:07.628525   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:07.634748   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:07.634819   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:07.643756   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:00:07.657734   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:00:07.668305   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:00:07.672451   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:00:07.672508   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:00:07.677939   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
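The "openssl x509 -hash" / "ln -fs" pairs above follow OpenSSL's CA-directory convention: each trusted PEM copied to /usr/share/ca-certificates is also exposed as /etc/ssl/certs/<subject-hash>.0 so verification can look it up by hash. A minimal Go sketch of that one step (a hypothetical helper, not minikube's certs.go; it assumes the openssl binary is on PATH and that the process can write to /etc/ssl/certs):

    // Hypothetical sketch (not minikube's code): hash a CA PEM and install the
    // /etc/ssl/certs/<subject-hash>.0 symlink, mirroring the commands in the log.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash symlinks /etc/ssl/certs/<hash>.0 to pemPath, the name
    // OpenSSL uses when scanning a CA directory during verification.
    func linkBySubjectHash(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Replace any stale link, mirroring `ln -fs` in the log above.
    	_ = os.Remove(link)
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }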
	I1026 01:00:07.688219   27934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:00:07.691924   27934 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 01:00:07.691988   27934 kubeadm.go:392] StartCluster: {Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:00:07.692059   27934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 01:00:07.692137   27934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 01:00:07.731345   27934 cri.go:89] found id: ""
	I1026 01:00:07.731417   27934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 01:00:07.741208   27934 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 01:00:07.750623   27934 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 01:00:07.760311   27934 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 01:00:07.760340   27934 kubeadm.go:157] found existing configuration files:
	
	I1026 01:00:07.760383   27934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 01:00:07.769207   27934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 01:00:07.769267   27934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 01:00:07.778578   27934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 01:00:07.787579   27934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 01:00:07.787661   27934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 01:00:07.797042   27934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 01:00:07.805955   27934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 01:00:07.806016   27934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 01:00:07.815274   27934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 01:00:07.824206   27934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 01:00:07.824269   27934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 01:00:07.833410   27934 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 01:00:07.938802   27934 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1026 01:00:07.938923   27934 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 01:00:08.028635   27934 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 01:00:08.028791   27934 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 01:00:08.028932   27934 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 01:00:08.038844   27934 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 01:00:08.041881   27934 out.go:235]   - Generating certificates and keys ...
	I1026 01:00:08.042903   27934 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 01:00:08.042973   27934 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 01:00:08.315204   27934 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 01:00:08.725495   27934 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1026 01:00:08.806960   27934 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1026 01:00:08.984098   27934 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1026 01:00:09.149484   27934 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1026 01:00:09.149653   27934 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-300623 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1026 01:00:09.309448   27934 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1026 01:00:09.309592   27934 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-300623 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1026 01:00:09.556294   27934 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 01:00:09.712766   27934 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 01:00:10.018193   27934 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1026 01:00:10.018258   27934 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 01:00:10.257230   27934 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 01:00:10.645833   27934 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 01:00:10.887377   27934 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 01:00:11.179208   27934 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 01:00:11.353056   27934 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 01:00:11.353655   27934 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 01:00:11.356992   27934 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 01:00:11.358796   27934 out.go:235]   - Booting up control plane ...
	I1026 01:00:11.358907   27934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 01:00:11.358983   27934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 01:00:11.359320   27934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 01:00:11.375691   27934 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 01:00:11.384224   27934 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 01:00:11.384282   27934 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 01:00:11.520735   27934 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 01:00:11.520904   27934 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 01:00:12.022375   27934 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.622573ms
	I1026 01:00:12.022456   27934 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1026 01:00:18.050317   27934 kubeadm.go:310] [api-check] The API server is healthy after 6.027294666s
	I1026 01:00:18.065132   27934 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 01:00:18.091049   27934 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 01:00:18.625277   27934 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 01:00:18.625502   27934 kubeadm.go:310] [mark-control-plane] Marking the node ha-300623 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 01:00:18.641286   27934 kubeadm.go:310] [bootstrap-token] Using token: 0x0agx.12z45ob3hq7so0d8
	I1026 01:00:18.642941   27934 out.go:235]   - Configuring RBAC rules ...
	I1026 01:00:18.643084   27934 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 01:00:18.651507   27934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 01:00:18.661575   27934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 01:00:18.665545   27934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 01:00:18.669512   27934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 01:00:18.677272   27934 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 01:00:18.691190   27934 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 01:00:18.958591   27934 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1026 01:00:19.464064   27934 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1026 01:00:19.464088   27934 kubeadm.go:310] 
	I1026 01:00:19.464204   27934 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1026 01:00:19.464225   27934 kubeadm.go:310] 
	I1026 01:00:19.464365   27934 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1026 01:00:19.464377   27934 kubeadm.go:310] 
	I1026 01:00:19.464406   27934 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1026 01:00:19.464485   27934 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 01:00:19.464567   27934 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 01:00:19.464579   27934 kubeadm.go:310] 
	I1026 01:00:19.464644   27934 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1026 01:00:19.464655   27934 kubeadm.go:310] 
	I1026 01:00:19.464719   27934 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 01:00:19.464726   27934 kubeadm.go:310] 
	I1026 01:00:19.464814   27934 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1026 01:00:19.464930   27934 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 01:00:19.465024   27934 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 01:00:19.465033   27934 kubeadm.go:310] 
	I1026 01:00:19.465247   27934 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 01:00:19.465347   27934 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1026 01:00:19.465355   27934 kubeadm.go:310] 
	I1026 01:00:19.465464   27934 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0x0agx.12z45ob3hq7so0d8 \
	I1026 01:00:19.465592   27934 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d \
	I1026 01:00:19.465626   27934 kubeadm.go:310] 	--control-plane 
	I1026 01:00:19.465634   27934 kubeadm.go:310] 
	I1026 01:00:19.465757   27934 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1026 01:00:19.465771   27934 kubeadm.go:310] 
	I1026 01:00:19.465887   27934 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0x0agx.12z45ob3hq7so0d8 \
	I1026 01:00:19.466042   27934 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d 
	I1026 01:00:19.466324   27934 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 01:00:19.466354   27934 cni.go:84] Creating CNI manager for ""
	I1026 01:00:19.466370   27934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1026 01:00:19.468090   27934 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1026 01:00:19.469492   27934 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 01:00:19.474603   27934 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1026 01:00:19.474628   27934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 01:00:19.493103   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 01:00:19.838794   27934 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 01:00:19.838909   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:19.838923   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-300623 minikube.k8s.io/updated_at=2024_10_26T01_00_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=ha-300623 minikube.k8s.io/primary=true
	I1026 01:00:19.860886   27934 ops.go:34] apiserver oom_adj: -16
	I1026 01:00:19.991866   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:20.492140   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:20.992964   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:21.492707   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:21.992237   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:22.491957   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:22.992426   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:23.492181   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:00:23.615897   27934 kubeadm.go:1113] duration metric: took 3.777077904s to wait for elevateKubeSystemPrivileges
	I1026 01:00:23.615938   27934 kubeadm.go:394] duration metric: took 15.923953549s to StartCluster
	I1026 01:00:23.615966   27934 settings.go:142] acquiring lock: {Name:mkb363a7a1b1532a7f832b54a0283d0a9e3d2b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:23.616076   27934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:00:23.616984   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:23.617268   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 01:00:23.617267   27934 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:00:23.617376   27934 start.go:241] waiting for startup goroutines ...
	I1026 01:00:23.617295   27934 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 01:00:23.617401   27934 addons.go:69] Setting storage-provisioner=true in profile "ha-300623"
	I1026 01:00:23.617447   27934 addons.go:234] Setting addon storage-provisioner=true in "ha-300623"
	I1026 01:00:23.617472   27934 addons.go:69] Setting default-storageclass=true in profile "ha-300623"
	I1026 01:00:23.617485   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:00:23.617498   27934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-300623"
	I1026 01:00:23.617505   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:23.617969   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.618010   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.618031   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.618073   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.633825   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35933
	I1026 01:00:23.633917   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38951
	I1026 01:00:23.634401   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.634418   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.634846   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.634864   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.634968   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.634988   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.635198   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.635332   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.635386   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:23.635834   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.635876   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.637603   27934 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:00:23.637812   27934 kapi.go:59] client config for ha-300623: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt", KeyFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key", CAFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 01:00:23.638218   27934 cert_rotation.go:140] Starting client certificate rotation controller
	I1026 01:00:23.638343   27934 addons.go:234] Setting addon default-storageclass=true in "ha-300623"
	I1026 01:00:23.638387   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:00:23.638626   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.638653   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.651480   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45267
	I1026 01:00:23.651965   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.652480   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.652510   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.652799   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.652991   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:23.653021   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42361
	I1026 01:00:23.654147   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.654693   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.654718   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.654832   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:23.655239   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.655791   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:23.655841   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:23.656920   27934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:00:23.658814   27934 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:00:23.658834   27934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 01:00:23.658853   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:23.662101   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:23.662598   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:23.662632   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:23.662848   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:23.663049   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:23.663200   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:23.663316   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:23.671976   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I1026 01:00:23.672433   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:23.672925   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:23.672950   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:23.673249   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:23.673483   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:23.675058   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:23.675265   27934 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 01:00:23.675282   27934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 01:00:23.675298   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:23.678185   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:23.678589   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:23.678611   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:23.678792   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:23.678957   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:23.679108   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:23.679249   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:23.762178   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
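The pipeline above fetches the coredns ConfigMap, inserts a hosts block (mapping host.minikube.internal to 192.168.39.1) ahead of the "forward . /etc/resolv.conf" line plus a "log" directive, and replaces the ConfigMap. A minimal Go sketch of just the hosts-injection edit (illustrative only, not minikube's implementation; the sample Corefile below is made up):

    // Illustrative sketch: insert a hosts{} block before the
    // "forward . /etc/resolv.conf" line of a Corefile, the same edit the sed
    // pipeline in the log performs before `kubectl replace`.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func injectHostRecord(corefile, hostIP string) string {
    	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	var b strings.Builder
    	for _, line := range strings.Split(corefile, "\n") {
    		if strings.Contains(line, "forward . /etc/resolv.conf") {
    			b.WriteString(hostsBlock) // add the block just above the forward plugin
    		}
    		b.WriteString(line)
    		b.WriteString("\n")
    	}
    	return b.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}"
    	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
    }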
	I1026 01:00:23.824448   27934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:00:23.874821   27934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 01:00:24.116804   27934 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1026 01:00:24.301862   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.301884   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.301919   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.301937   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.302168   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.302185   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.302194   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.302193   27934 main.go:141] libmachine: (ha-300623) DBG | Closing plugin on server side
	I1026 01:00:24.302200   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.302168   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.302221   27934 main.go:141] libmachine: (ha-300623) DBG | Closing plugin on server side
	I1026 01:00:24.302229   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.302239   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.302246   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.302447   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.302464   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.302531   27934 main.go:141] libmachine: (ha-300623) DBG | Closing plugin on server side
	I1026 01:00:24.302526   27934 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1026 01:00:24.302571   27934 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1026 01:00:24.302606   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.302631   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.302680   27934 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1026 01:00:24.302699   27934 round_trippers.go:469] Request Headers:
	I1026 01:00:24.302706   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:00:24.302710   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:00:24.315108   27934 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1026 01:00:24.315658   27934 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1026 01:00:24.315672   27934 round_trippers.go:469] Request Headers:
	I1026 01:00:24.315679   27934 round_trippers.go:473]     Content-Type: application/json
	I1026 01:00:24.315683   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:00:24.315686   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:00:24.318571   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:00:24.318791   27934 main.go:141] libmachine: Making call to close driver server
	I1026 01:00:24.318805   27934 main.go:141] libmachine: (ha-300623) Calling .Close
	I1026 01:00:24.319072   27934 main.go:141] libmachine: Successfully made call to close driver server
	I1026 01:00:24.319089   27934 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 01:00:24.319093   27934 main.go:141] libmachine: (ha-300623) DBG | Closing plugin on server side
	I1026 01:00:24.321441   27934 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1026 01:00:24.323036   27934 addons.go:510] duration metric: took 705.743688ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 01:00:24.323074   27934 start.go:246] waiting for cluster config update ...
	I1026 01:00:24.323088   27934 start.go:255] writing updated cluster config ...
	I1026 01:00:24.324580   27934 out.go:201] 
	I1026 01:00:24.325800   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:24.325876   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:00:24.327345   27934 out.go:177] * Starting "ha-300623-m02" control-plane node in "ha-300623" cluster
	I1026 01:00:24.329009   27934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:00:24.329028   27934 cache.go:56] Caching tarball of preloaded images
	I1026 01:00:24.329124   27934 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:00:24.329138   27934 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 01:00:24.329209   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:00:24.329375   27934 start.go:360] acquireMachinesLock for ha-300623-m02: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 01:00:24.329429   27934 start.go:364] duration metric: took 35.088µs to acquireMachinesLock for "ha-300623-m02"
	I1026 01:00:24.329452   27934 start.go:93] Provisioning new machine with config: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:00:24.329544   27934 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1026 01:00:24.330943   27934 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1026 01:00:24.331025   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:24.331057   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:24.345495   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40299
	I1026 01:00:24.346002   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:24.346476   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:24.346491   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:24.346765   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:24.346970   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetMachineName
	I1026 01:00:24.347113   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:24.347293   27934 start.go:159] libmachine.API.Create for "ha-300623" (driver="kvm2")
	I1026 01:00:24.347323   27934 client.go:168] LocalClient.Create starting
	I1026 01:00:24.347359   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 01:00:24.347400   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 01:00:24.347421   27934 main.go:141] libmachine: Parsing certificate...
	I1026 01:00:24.347493   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 01:00:24.347519   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 01:00:24.347536   27934 main.go:141] libmachine: Parsing certificate...
	I1026 01:00:24.347559   27934 main.go:141] libmachine: Running pre-create checks...
	I1026 01:00:24.347568   27934 main.go:141] libmachine: (ha-300623-m02) Calling .PreCreateCheck
	I1026 01:00:24.347721   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetConfigRaw
	I1026 01:00:24.348120   27934 main.go:141] libmachine: Creating machine...
	I1026 01:00:24.348135   27934 main.go:141] libmachine: (ha-300623-m02) Calling .Create
	I1026 01:00:24.348260   27934 main.go:141] libmachine: (ha-300623-m02) Creating KVM machine...
	I1026 01:00:24.349505   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found existing default KVM network
	I1026 01:00:24.349630   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found existing private KVM network mk-ha-300623
	I1026 01:00:24.349770   27934 main.go:141] libmachine: (ha-300623-m02) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02 ...
	I1026 01:00:24.349806   27934 main.go:141] libmachine: (ha-300623-m02) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 01:00:24.349877   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:24.349757   28306 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:00:24.349949   27934 main.go:141] libmachine: (ha-300623-m02) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 01:00:24.581858   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:24.581729   28306 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa...
	I1026 01:00:24.824457   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:24.824338   28306 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/ha-300623-m02.rawdisk...
	I1026 01:00:24.824488   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Writing magic tar header
	I1026 01:00:24.824501   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Writing SSH key tar header
	I1026 01:00:24.824514   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:24.824445   28306 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02 ...
	I1026 01:00:24.824563   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02
	I1026 01:00:24.824601   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 01:00:24.824632   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:00:24.824643   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02 (perms=drwx------)
	I1026 01:00:24.824650   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 01:00:24.824656   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 01:00:24.824665   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 01:00:24.824671   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 01:00:24.824679   27934 main.go:141] libmachine: (ha-300623-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 01:00:24.824685   27934 main.go:141] libmachine: (ha-300623-m02) Creating domain...
	I1026 01:00:24.824694   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 01:00:24.824702   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 01:00:24.824707   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home/jenkins
	I1026 01:00:24.824717   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Checking permissions on dir: /home
	I1026 01:00:24.824748   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Skipping /home - not owner
	I1026 01:00:24.825705   27934 main.go:141] libmachine: (ha-300623-m02) define libvirt domain using xml: 
	I1026 01:00:24.825725   27934 main.go:141] libmachine: (ha-300623-m02) <domain type='kvm'>
	I1026 01:00:24.825740   27934 main.go:141] libmachine: (ha-300623-m02)   <name>ha-300623-m02</name>
	I1026 01:00:24.825751   27934 main.go:141] libmachine: (ha-300623-m02)   <memory unit='MiB'>2200</memory>
	I1026 01:00:24.825760   27934 main.go:141] libmachine: (ha-300623-m02)   <vcpu>2</vcpu>
	I1026 01:00:24.825769   27934 main.go:141] libmachine: (ha-300623-m02)   <features>
	I1026 01:00:24.825777   27934 main.go:141] libmachine: (ha-300623-m02)     <acpi/>
	I1026 01:00:24.825786   27934 main.go:141] libmachine: (ha-300623-m02)     <apic/>
	I1026 01:00:24.825807   27934 main.go:141] libmachine: (ha-300623-m02)     <pae/>
	I1026 01:00:24.825825   27934 main.go:141] libmachine: (ha-300623-m02)     
	I1026 01:00:24.825837   27934 main.go:141] libmachine: (ha-300623-m02)   </features>
	I1026 01:00:24.825845   27934 main.go:141] libmachine: (ha-300623-m02)   <cpu mode='host-passthrough'>
	I1026 01:00:24.825850   27934 main.go:141] libmachine: (ha-300623-m02)   
	I1026 01:00:24.825856   27934 main.go:141] libmachine: (ha-300623-m02)   </cpu>
	I1026 01:00:24.825861   27934 main.go:141] libmachine: (ha-300623-m02)   <os>
	I1026 01:00:24.825868   27934 main.go:141] libmachine: (ha-300623-m02)     <type>hvm</type>
	I1026 01:00:24.825873   27934 main.go:141] libmachine: (ha-300623-m02)     <boot dev='cdrom'/>
	I1026 01:00:24.825880   27934 main.go:141] libmachine: (ha-300623-m02)     <boot dev='hd'/>
	I1026 01:00:24.825888   27934 main.go:141] libmachine: (ha-300623-m02)     <bootmenu enable='no'/>
	I1026 01:00:24.825901   27934 main.go:141] libmachine: (ha-300623-m02)   </os>
	I1026 01:00:24.825911   27934 main.go:141] libmachine: (ha-300623-m02)   <devices>
	I1026 01:00:24.825922   27934 main.go:141] libmachine: (ha-300623-m02)     <disk type='file' device='cdrom'>
	I1026 01:00:24.825934   27934 main.go:141] libmachine: (ha-300623-m02)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/boot2docker.iso'/>
	I1026 01:00:24.825942   27934 main.go:141] libmachine: (ha-300623-m02)       <target dev='hdc' bus='scsi'/>
	I1026 01:00:24.825947   27934 main.go:141] libmachine: (ha-300623-m02)       <readonly/>
	I1026 01:00:24.825955   27934 main.go:141] libmachine: (ha-300623-m02)     </disk>
	I1026 01:00:24.825960   27934 main.go:141] libmachine: (ha-300623-m02)     <disk type='file' device='disk'>
	I1026 01:00:24.825967   27934 main.go:141] libmachine: (ha-300623-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 01:00:24.825975   27934 main.go:141] libmachine: (ha-300623-m02)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/ha-300623-m02.rawdisk'/>
	I1026 01:00:24.825984   27934 main.go:141] libmachine: (ha-300623-m02)       <target dev='hda' bus='virtio'/>
	I1026 01:00:24.825991   27934 main.go:141] libmachine: (ha-300623-m02)     </disk>
	I1026 01:00:24.826012   27934 main.go:141] libmachine: (ha-300623-m02)     <interface type='network'>
	I1026 01:00:24.826033   27934 main.go:141] libmachine: (ha-300623-m02)       <source network='mk-ha-300623'/>
	I1026 01:00:24.826045   27934 main.go:141] libmachine: (ha-300623-m02)       <model type='virtio'/>
	I1026 01:00:24.826054   27934 main.go:141] libmachine: (ha-300623-m02)     </interface>
	I1026 01:00:24.826063   27934 main.go:141] libmachine: (ha-300623-m02)     <interface type='network'>
	I1026 01:00:24.826074   27934 main.go:141] libmachine: (ha-300623-m02)       <source network='default'/>
	I1026 01:00:24.826082   27934 main.go:141] libmachine: (ha-300623-m02)       <model type='virtio'/>
	I1026 01:00:24.826091   27934 main.go:141] libmachine: (ha-300623-m02)     </interface>
	I1026 01:00:24.826098   27934 main.go:141] libmachine: (ha-300623-m02)     <serial type='pty'>
	I1026 01:00:24.826107   27934 main.go:141] libmachine: (ha-300623-m02)       <target port='0'/>
	I1026 01:00:24.826112   27934 main.go:141] libmachine: (ha-300623-m02)     </serial>
	I1026 01:00:24.826119   27934 main.go:141] libmachine: (ha-300623-m02)     <console type='pty'>
	I1026 01:00:24.826136   27934 main.go:141] libmachine: (ha-300623-m02)       <target type='serial' port='0'/>
	I1026 01:00:24.826153   27934 main.go:141] libmachine: (ha-300623-m02)     </console>
	I1026 01:00:24.826166   27934 main.go:141] libmachine: (ha-300623-m02)     <rng model='virtio'>
	I1026 01:00:24.826178   27934 main.go:141] libmachine: (ha-300623-m02)       <backend model='random'>/dev/random</backend>
	I1026 01:00:24.826187   27934 main.go:141] libmachine: (ha-300623-m02)     </rng>
	I1026 01:00:24.826194   27934 main.go:141] libmachine: (ha-300623-m02)     
	I1026 01:00:24.826201   27934 main.go:141] libmachine: (ha-300623-m02)     
	I1026 01:00:24.826210   27934 main.go:141] libmachine: (ha-300623-m02)   </devices>
	I1026 01:00:24.826218   27934 main.go:141] libmachine: (ha-300623-m02) </domain>
	I1026 01:00:24.826230   27934 main.go:141] libmachine: (ha-300623-m02) 
	I1026 01:00:24.834328   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:19:9b:85 in network default
	I1026 01:00:24.834898   27934 main.go:141] libmachine: (ha-300623-m02) Ensuring networks are active...
	I1026 01:00:24.834921   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:24.835679   27934 main.go:141] libmachine: (ha-300623-m02) Ensuring network default is active
	I1026 01:00:24.836033   27934 main.go:141] libmachine: (ha-300623-m02) Ensuring network mk-ha-300623 is active
	I1026 01:00:24.836422   27934 main.go:141] libmachine: (ha-300623-m02) Getting domain xml...
	I1026 01:00:24.837184   27934 main.go:141] libmachine: (ha-300623-m02) Creating domain...
	I1026 01:00:26.123801   27934 main.go:141] libmachine: (ha-300623-m02) Waiting to get IP...
	I1026 01:00:26.124786   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:26.125171   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:26.125213   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:26.125161   28306 retry.go:31] will retry after 239.473798ms: waiting for machine to come up
	I1026 01:00:26.366497   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:26.367035   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:26.367063   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:26.366991   28306 retry.go:31] will retry after 247.775109ms: waiting for machine to come up
	I1026 01:00:26.616299   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:26.616749   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:26.616770   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:26.616730   28306 retry.go:31] will retry after 304.793231ms: waiting for machine to come up
	I1026 01:00:26.923149   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:26.923677   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:26.923696   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:26.923618   28306 retry.go:31] will retry after 501.966284ms: waiting for machine to come up
	I1026 01:00:27.427149   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:27.427595   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:27.427620   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:27.427557   28306 retry.go:31] will retry after 462.793286ms: waiting for machine to come up
	I1026 01:00:27.892113   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:27.892649   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:27.892674   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:27.892601   28306 retry.go:31] will retry after 627.280628ms: waiting for machine to come up
	I1026 01:00:28.521634   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:28.522118   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:28.522154   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:28.522059   28306 retry.go:31] will retry after 1.043043357s: waiting for machine to come up
	I1026 01:00:29.566267   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:29.566670   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:29.566697   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:29.566641   28306 retry.go:31] will retry after 925.497125ms: waiting for machine to come up
	I1026 01:00:30.493367   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:30.493801   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:30.493826   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:30.493760   28306 retry.go:31] will retry after 1.604522192s: waiting for machine to come up
	I1026 01:00:32.100432   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:32.100961   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:32.100982   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:32.100919   28306 retry.go:31] will retry after 2.197958234s: waiting for machine to come up
	I1026 01:00:34.301338   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:34.301864   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:34.301891   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:34.301813   28306 retry.go:31] will retry after 1.917554174s: waiting for machine to come up
	I1026 01:00:36.221440   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:36.221869   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:36.221888   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:36.221830   28306 retry.go:31] will retry after 3.272341592s: waiting for machine to come up
	I1026 01:00:39.496057   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:39.496525   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:39.496555   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:39.496473   28306 retry.go:31] will retry after 3.688097346s: waiting for machine to come up
	I1026 01:00:43.186914   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:43.187251   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find current IP address of domain ha-300623-m02 in network mk-ha-300623
	I1026 01:00:43.187284   27934 main.go:141] libmachine: (ha-300623-m02) DBG | I1026 01:00:43.187241   28306 retry.go:31] will retry after 5.370855346s: waiting for machine to come up
	I1026 01:00:48.563319   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.563799   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has current primary IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.563826   27934 main.go:141] libmachine: (ha-300623-m02) Found IP for machine: 192.168.39.62
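Annotation: the "will retry after …" lines above are a backoff loop polling libvirt for the new domain's DHCP lease until an IP appears. A generic sketch of that wait-with-backoff pattern (not the actual retry.go implementation); the probe function is a stand-in for the lease lookup:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitFor retries probe with jittered, roughly doubling delays until it
    // succeeds or the overall timeout elapses.
    func waitFor(probe func() error, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		err := probe()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
    		time.Sleep(delay + jitter)
    		if delay < 5*time.Second {
    			delay *= 2
    		}
    	}
    }

    func main() {
    	// Toy probe that "finds an IP" on the 4th call.
    	calls := 0
    	err := waitFor(func() error {
    		calls++
    		if calls < 4 {
    			return errors.New("unable to find current IP address")
    		}
    		return nil
    	}, time.Minute)
    	fmt.Println("result:", err)
    }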
	I1026 01:00:48.563869   27934 main.go:141] libmachine: (ha-300623-m02) Reserving static IP address...
	I1026 01:00:48.564263   27934 main.go:141] libmachine: (ha-300623-m02) DBG | unable to find host DHCP lease matching {name: "ha-300623-m02", mac: "52:54:00:eb:f2:95", ip: "192.168.39.62"} in network mk-ha-300623
	I1026 01:00:48.642625   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Getting to WaitForSSH function...
	I1026 01:00:48.642658   27934 main.go:141] libmachine: (ha-300623-m02) Reserved static IP address: 192.168.39.62
	I1026 01:00:48.642673   27934 main.go:141] libmachine: (ha-300623-m02) Waiting for SSH to be available...
	I1026 01:00:48.645214   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.645726   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:48.645751   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.645908   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Using SSH client type: external
	I1026 01:00:48.645957   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa (-rw-------)
	I1026 01:00:48.645990   27934 main.go:141] libmachine: (ha-300623-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 01:00:48.646004   27934 main.go:141] libmachine: (ha-300623-m02) DBG | About to run SSH command:
	I1026 01:00:48.646022   27934 main.go:141] libmachine: (ha-300623-m02) DBG | exit 0
	I1026 01:00:48.773437   27934 main.go:141] libmachine: (ha-300623-m02) DBG | SSH cmd err, output: <nil>: 
	I1026 01:00:48.773671   27934 main.go:141] libmachine: (ha-300623-m02) KVM machine creation complete!
	I1026 01:00:48.773985   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetConfigRaw
	I1026 01:00:48.774531   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:48.774718   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:48.774839   27934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 01:00:48.774863   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetState
	I1026 01:00:48.776153   27934 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 01:00:48.776168   27934 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 01:00:48.776176   27934 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 01:00:48.776184   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:48.778481   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.778857   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:48.778884   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.778991   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:48.779164   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:48.779300   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:48.779402   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:48.779538   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:48.779788   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:48.779807   27934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 01:00:48.896727   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
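Annotation: from here libmachine switches to its built-in ("native") SSH client and proves the guest is reachable by running exit 0 over the connection. A minimal sketch of that kind of liveness probe with golang.org/x/crypto/ssh, using key auth and skipping host-key verification just as the -o StrictHostKeyChecking=no option above does; address, user, and key path are placeholders:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runRemote runs a single command over SSH and returns its combined output.
    func runRemote(addr, user, keyPath, cmd string) ([]byte, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return nil, err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return nil, err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return nil, err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return nil, err
    	}
    	defer sess.Close()
    	return sess.CombinedOutput(cmd)
    }

    func main() {
    	// "exit 0" is the cheapest possible liveness check.
    	if _, err := runRemote("192.168.39.62:22", "docker", "/path/to/id_rsa", "exit 0"); err != nil {
    		fmt.Println("ssh not ready yet:", err)
    		return
    	}
    	fmt.Println("ssh is available")
    }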
	I1026 01:00:48.896751   27934 main.go:141] libmachine: Detecting the provisioner...
	I1026 01:00:48.896762   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:48.899398   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.899741   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:48.899779   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:48.899885   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:48.900047   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:48.900184   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:48.900289   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:48.900414   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:48.900617   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:48.900631   27934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 01:00:49.017846   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 01:00:49.017965   27934 main.go:141] libmachine: found compatible host: buildroot
	I1026 01:00:49.017981   27934 main.go:141] libmachine: Provisioning with buildroot...
	I1026 01:00:49.017993   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetMachineName
	I1026 01:00:49.018219   27934 buildroot.go:166] provisioning hostname "ha-300623-m02"
	I1026 01:00:49.018266   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetMachineName
	I1026 01:00:49.018441   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.021311   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.022133   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.022168   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.022362   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.022542   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.022691   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.022833   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.022971   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:49.023157   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:49.023181   27934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-300623-m02 && echo "ha-300623-m02" | sudo tee /etc/hostname
	I1026 01:00:49.154863   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-300623-m02
	
	I1026 01:00:49.154891   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.157409   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.157924   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.157965   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.158127   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.158313   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.158463   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.158583   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.158721   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:49.158874   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:49.158890   27934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-300623-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-300623-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-300623-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:00:49.281279   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:00:49.281312   27934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:00:49.281349   27934 buildroot.go:174] setting up certificates
	I1026 01:00:49.281361   27934 provision.go:84] configureAuth start
	I1026 01:00:49.281370   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetMachineName
	I1026 01:00:49.281641   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetIP
	I1026 01:00:49.284261   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.284619   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.284660   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.284785   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.286954   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.287298   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.287326   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.287470   27934 provision.go:143] copyHostCerts
	I1026 01:00:49.287501   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:00:49.287544   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:00:49.287555   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:00:49.287640   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:00:49.287745   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:00:49.287775   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:00:49.287788   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:00:49.287835   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:00:49.287908   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:00:49.287934   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:00:49.287941   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:00:49.287990   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:00:49.288059   27934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.ha-300623-m02 san=[127.0.0.1 192.168.39.62 ha-300623-m02 localhost minikube]
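Annotation: provision.go mints a per-node server certificate signed with the shared minikube CA key, carrying the IPs and hostnames from the san=[…] list above. This is not the minikube code, but the same idea in plain crypto/x509, assuming the CA certificate and RSA key are available as PKCS#1 PEM files at the hypothetical paths ca.pem and ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // mustPEM reads a file and returns the DER bytes of its first PEM block.
    func mustPEM(path string) []byte {
    	b, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(b)
    	if block == nil {
    		panic("no PEM block in " + path)
    	}
    	return block.Bytes
    }

    func main() {
    	caCert, err := x509.ParseCertificate(mustPEM("ca.pem"))
    	if err != nil {
    		panic(err)
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem"))
    	if err != nil {
    		panic(err)
    	}
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{CommonName: "ha-300623-m02", Organization: []string{"jenkins.ha-300623-m02"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs taken from the log line above.
    		DNSNames:    []string{"ha-300623-m02", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.62")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }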
	I1026 01:00:49.407467   27934 provision.go:177] copyRemoteCerts
	I1026 01:00:49.407520   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:00:49.407552   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.410082   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.410436   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.410457   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.410696   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.410880   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.411041   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.411166   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
	I1026 01:00:49.495389   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:00:49.495471   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:00:49.520501   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:00:49.520571   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 01:00:49.544170   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:00:49.544266   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
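Annotation: copyRemoteCerts pushes the CA and the freshly generated server key pair into /etc/docker on the guest. One simple way to do that kind of push, shown purely as a sketch, is to stream the bytes into sudo tee over an SSH session; it assumes an already-dialled *ssh.Client like the one in the probe sketch earlier:

    package sshcopy

    import (
    	"bytes"
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // pushFile writes localPath to remotePath on the guest by piping the bytes
    // into "sudo tee", then sets the permission bits with chmod.
    func pushFile(client *ssh.Client, localPath, remotePath string, mode os.FileMode) error {
    	data, err := os.ReadFile(localPath)
    	if err != nil {
    		return err
    	}
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	cmd := fmt.Sprintf("sudo tee %s > /dev/null && sudo chmod %o %s", remotePath, mode, remotePath)
    	if out, err := sess.CombinedOutput(cmd); err != nil {
    		return fmt.Errorf("%v: %s", err, out)
    	}
    	return nil
    }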
	I1026 01:00:49.567939   27934 provision.go:87] duration metric: took 286.565797ms to configureAuth
	I1026 01:00:49.567967   27934 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:00:49.568139   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:49.568207   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.570619   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.570975   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.571000   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.571206   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.571396   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.571565   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.571706   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.571875   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:49.572093   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:49.572115   27934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:00:49.802107   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:00:49.802136   27934 main.go:141] libmachine: Checking connection to Docker...
	I1026 01:00:49.802143   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetURL
	I1026 01:00:49.803331   27934 main.go:141] libmachine: (ha-300623-m02) DBG | Using libvirt version 6000000
	I1026 01:00:49.805234   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.805565   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.805594   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.805716   27934 main.go:141] libmachine: Docker is up and running!
	I1026 01:00:49.805729   27934 main.go:141] libmachine: Reticulating splines...
	I1026 01:00:49.805746   27934 client.go:171] duration metric: took 25.458413075s to LocalClient.Create
	I1026 01:00:49.805769   27934 start.go:167] duration metric: took 25.45847781s to libmachine.API.Create "ha-300623"
	I1026 01:00:49.805779   27934 start.go:293] postStartSetup for "ha-300623-m02" (driver="kvm2")
	I1026 01:00:49.805791   27934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:00:49.805808   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:49.806042   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:00:49.806065   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.808068   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.808407   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.808434   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.808582   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.808773   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.808963   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.809100   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
	I1026 01:00:49.895521   27934 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:00:49.899409   27934 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:00:49.899435   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:00:49.899514   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:00:49.899627   27934 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:00:49.899639   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /etc/ssl/certs/176152.pem
	I1026 01:00:49.899762   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:00:49.908849   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:00:49.931119   27934 start.go:296] duration metric: took 125.326962ms for postStartSetup
	I1026 01:00:49.931168   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetConfigRaw
	I1026 01:00:49.931760   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetIP
	I1026 01:00:49.934318   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.934656   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.934677   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.934971   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:00:49.935199   27934 start.go:128] duration metric: took 25.605643958s to createHost
	I1026 01:00:49.935242   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:49.937348   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.937642   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:49.937668   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:49.937766   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:49.937916   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.938069   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:49.938232   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:49.938387   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:00:49.938577   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1026 01:00:49.938589   27934 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:00:50.054126   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729904450.033939767
	
	I1026 01:00:50.054149   27934 fix.go:216] guest clock: 1729904450.033939767
	I1026 01:00:50.054158   27934 fix.go:229] Guest: 2024-10-26 01:00:50.033939767 +0000 UTC Remote: 2024-10-26 01:00:49.935212743 +0000 UTC m=+68.870574304 (delta=98.727024ms)
	I1026 01:00:50.054179   27934 fix.go:200] guest clock delta is within tolerance: 98.727024ms
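Annotation: fix.go reads the guest clock over SSH with date +%s.%N and compares it against the host's wall clock, accepting the result when the delta stays within tolerance. A toy version of that comparison, fed the exact values from the lines above (it reproduces the 98.727024ms delta):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses "seconds.nanoseconds" as printed by `date +%s.%N`
    // (%N always yields nine digits) and returns the absolute difference
    // from the supplied local timestamp.
    func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, err = strconv.ParseInt(parts[1], 10, 64)
    		if err != nil {
    			return 0, err
    		}
    	}
    	d := time.Unix(sec, nsec).Sub(local)
    	if d < 0 {
    		d = -d
    	}
    	return d, nil
    }

    func main() {
    	// Values lifted from the log above.
    	delta, _ := clockDelta("1729904450.033939767\n", time.Unix(1729904449, 935212743))
    	const tolerance = time.Second
    	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
    }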
	I1026 01:00:50.054185   27934 start.go:83] releasing machines lock for "ha-300623-m02", held for 25.72474455s
	I1026 01:00:50.054206   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:50.054478   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetIP
	I1026 01:00:50.057251   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.057634   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:50.057666   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.060016   27934 out.go:177] * Found network options:
	I1026 01:00:50.061125   27934 out.go:177]   - NO_PROXY=192.168.39.183
	W1026 01:00:50.062183   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:00:50.062255   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:50.062824   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:50.062979   27934 main.go:141] libmachine: (ha-300623-m02) Calling .DriverName
	I1026 01:00:50.063068   27934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:00:50.063107   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	W1026 01:00:50.063196   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:00:50.063287   27934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:00:50.063313   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHHostname
	I1026 01:00:50.065732   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.065764   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.066105   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:50.066132   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.066157   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:50.066172   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:50.066255   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:50.066343   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHPort
	I1026 01:00:50.066466   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:50.066529   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHKeyPath
	I1026 01:00:50.066613   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:50.066757   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
	I1026 01:00:50.066776   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetSSHUsername
	I1026 01:00:50.066891   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m02/id_rsa Username:docker}
	I1026 01:00:50.300821   27934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:00:50.306327   27934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:00:50.306383   27934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:00:50.322223   27934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 01:00:50.322250   27934 start.go:495] detecting cgroup driver to use...
	I1026 01:00:50.322315   27934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:00:50.338468   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:00:50.351846   27934 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:00:50.351912   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:00:50.366331   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:00:50.380253   27934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:00:50.506965   27934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:00:50.668001   27934 docker.go:233] disabling docker service ...
	I1026 01:00:50.668069   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:00:50.682592   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:00:50.695962   27934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:00:50.824939   27934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:00:50.938022   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:00:50.952273   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:00:50.970167   27934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:00:50.970223   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:50.980486   27934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:00:50.980547   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:50.991006   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.001215   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.011378   27934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:00:51.021477   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.031248   27934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.047066   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:00:51.056669   27934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:00:51.065644   27934 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 01:00:51.065713   27934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 01:00:51.077591   27934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:00:51.086612   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:00:51.190831   27934 ssh_runner.go:195] Run: sudo systemctl restart crio
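Annotation: the block of sed one-liners above pins cri-o to the registry.k8s.io/pause:3.10 pause image, switches it to the cgroupfs cgroup manager, and adds net.ipv4.ip_unprivileged_port_start=0 to the default sysctls before the service restart. A sed-free sketch of the same in-place line rewrite, done with a multiline regexp over the drop-in file (the path is from the log, everything else is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setConfLine replaces every line containing `<key> = ...` with newLine,
    // mimicking: sed -i 's|^.*<key> = .*$|<new line>|' <file>
    func setConfLine(path, key, newLine string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	return os.WriteFile(path, re.ReplaceAll(data, []byte(newLine)), 0644)
    }

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
    	if err := setConfLine(conf, "pause_image", `pause_image = "registry.k8s.io/pause:3.10"`); err != nil {
    		fmt.Println(err)
    	}
    	if err := setConfLine(conf, "cgroup_manager", `cgroup_manager = "cgroupfs"`); err != nil {
    		fmt.Println(err)
    	}
    }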
	I1026 01:00:51.272466   27934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:00:51.272541   27934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:00:51.277536   27934 start.go:563] Will wait 60s for crictl version
	I1026 01:00:51.277595   27934 ssh_runner.go:195] Run: which crictl
	I1026 01:00:51.281084   27934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:00:51.316243   27934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:00:51.316339   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:00:51.344007   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:00:51.373231   27934 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:00:51.374904   27934 out.go:177]   - env NO_PROXY=192.168.39.183
	I1026 01:00:51.375971   27934 main.go:141] libmachine: (ha-300623-m02) Calling .GetIP
	I1026 01:00:51.378647   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:51.378955   27934 main.go:141] libmachine: (ha-300623-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:f2:95", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:00:38 +0000 UTC Type:0 Mac:52:54:00:eb:f2:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-300623-m02 Clientid:01:52:54:00:eb:f2:95}
	I1026 01:00:51.378984   27934 main.go:141] libmachine: (ha-300623-m02) DBG | domain ha-300623-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:eb:f2:95 in network mk-ha-300623
	I1026 01:00:51.379181   27934 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:00:51.383229   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:00:51.395396   27934 mustload.go:65] Loading cluster: ha-300623
	I1026 01:00:51.395665   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:00:51.395979   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:51.396021   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:51.411495   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I1026 01:00:51.412012   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:51.412465   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:51.412492   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:51.412809   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:51.413020   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:00:51.414616   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:00:51.414900   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:51.414943   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:51.429345   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I1026 01:00:51.429857   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:51.430394   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:51.430414   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:51.430718   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:51.430932   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:51.431063   27934 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623 for IP: 192.168.39.62
	I1026 01:00:51.431072   27934 certs.go:194] generating shared ca certs ...
	I1026 01:00:51.431085   27934 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:51.431231   27934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:00:51.431297   27934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:00:51.431310   27934 certs.go:256] generating profile certs ...
	I1026 01:00:51.431379   27934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key
	I1026 01:00:51.431404   27934 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.7eff9eab
	I1026 01:00:51.431417   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.7eff9eab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.62 192.168.39.254]
	I1026 01:00:51.551653   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.7eff9eab ...
	I1026 01:00:51.551682   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.7eff9eab: {Name:mk7f84df361678f6c264c35c7a54837d967e14ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:51.551843   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.7eff9eab ...
	I1026 01:00:51.551855   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.7eff9eab: {Name:mkd389918e7eb8b1c88d8cee260e577971075312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:00:51.551931   27934 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.7eff9eab -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt
	I1026 01:00:51.552066   27934 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.7eff9eab -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key
	I1026 01:00:51.552188   27934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key
	I1026 01:00:51.552202   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:00:51.552214   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:00:51.552227   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:00:51.552240   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:00:51.552251   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:00:51.552262   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:00:51.552275   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:00:51.552287   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:00:51.552335   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:00:51.552366   27934 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:00:51.552375   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:00:51.552397   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:00:51.552420   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:00:51.552441   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:00:51.552479   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:00:51.552504   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:51.552517   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem -> /usr/share/ca-certificates/17615.pem
	I1026 01:00:51.552529   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /usr/share/ca-certificates/176152.pem
	I1026 01:00:51.552559   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:51.555385   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:51.555741   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:51.555776   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:51.555946   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:51.556121   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:51.556266   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:51.556384   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:51.633868   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1026 01:00:51.638556   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1026 01:00:51.651311   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1026 01:00:51.655533   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1026 01:00:51.667970   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1026 01:00:51.671912   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1026 01:00:51.681736   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1026 01:00:51.685589   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1026 01:00:51.695314   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1026 01:00:51.699011   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1026 01:00:51.709409   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1026 01:00:51.713200   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1026 01:00:51.722473   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:00:51.745687   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:00:51.767846   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:00:51.789516   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:00:51.811259   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1026 01:00:51.833028   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 01:00:51.856110   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:00:51.879410   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:00:51.905258   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:00:51.929159   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:00:51.951850   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:00:51.976197   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1026 01:00:51.991793   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1026 01:00:52.007237   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1026 01:00:52.023097   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1026 01:00:52.038541   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1026 01:00:52.053670   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1026 01:00:52.068858   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
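The scp lines above stage the cluster CAs, service-account keypair, front-proxy and etcd CAs, and the kubeconfig under /var/lib/minikube on the machine. A minimal sketch for spot-checking the transfer by hand, reusing the SSH key path, user, and IP from the sshutil line above (taken from the log, not independently verified):

  ssh -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa \
      docker@192.168.39.183 'sudo ls -l /var/lib/minikube/certs /var/lib/minikube/kubeconfig'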
	I1026 01:00:52.084534   27934 ssh_runner.go:195] Run: openssl version
	I1026 01:00:52.089743   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:00:52.099587   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:52.103529   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:52.103574   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:00:52.108773   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:00:52.118562   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:00:52.128439   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:00:52.132388   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:00:52.132437   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:00:52.137609   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 01:00:52.147519   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:00:52.157786   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:00:52.162186   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:00:52.162230   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:00:52.167650   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
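The ls / openssl x509 -hash / ln sequence above is how each CA is installed into the system trust store: the certificate is linked under /usr/share/ca-certificates and again under /etc/ssl/certs as <subject-hash>.0. A minimal manual equivalent for the minikube CA, assuming the same paths as in the log:

  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in the log above
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"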
	I1026 01:00:52.179201   27934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:00:52.183712   27934 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 01:00:52.183765   27934 kubeadm.go:934] updating node {m02 192.168.39.62 8443 v1.31.2 crio true true} ...
	I1026 01:00:52.183873   27934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-300623-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 01:00:52.183908   27934 kube-vip.go:115] generating kube-vip config ...
	I1026 01:00:52.183953   27934 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1026 01:00:52.201496   27934 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1026 01:00:52.201565   27934 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
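The manifest above runs kube-vip as a static pod so the control-plane VIP 192.168.39.254 stays reachable on port 8443 across control-plane nodes. Once it is copied to /etc/kubernetes/manifests (see the kube-vip.yaml scp below), a hedged way to confirm it on the guest; crictl and the eth0 interface name are assumptions based on the config above:

  sudo crictl pods --name kube-vip          # kubelet should have created the static pod
  ip addr show eth0 | grep 192.168.39.254   # the VIP is held by whichever node currently leads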
	I1026 01:00:52.201625   27934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:00:52.212390   27934 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1026 01:00:52.212439   27934 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1026 01:00:52.223416   27934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1026 01:00:52.223436   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1026 01:00:52.223483   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1026 01:00:52.223536   27934 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1026 01:00:52.223555   27934 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1026 01:00:52.227638   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1026 01:00:52.227662   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1026 01:00:53.105621   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1026 01:00:53.105715   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1026 01:00:53.110408   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1026 01:00:53.110445   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1026 01:00:53.233007   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:00:53.274448   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1026 01:00:53.274566   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1026 01:00:53.294441   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1026 01:00:53.294487   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
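The download.go and scp lines above fetch kubectl, kubeadm, and kubelet for v1.31.2 from dl.k8s.io, check them against the published .sha256 files, and place them under /var/lib/minikube/binaries/v1.31.2. A rough manual equivalent for one binary, using the same URLs as the log:

  curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet
  curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check     # the .sha256 file holds only the hash
  sudo install -m 0755 kubelet /var/lib/minikube/binaries/v1.31.2/kubelet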
	I1026 01:00:53.654866   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1026 01:00:53.664222   27934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1026 01:00:53.679840   27934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:00:53.695653   27934 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1026 01:00:53.711652   27934 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1026 01:00:53.715553   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:00:53.727360   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:00:53.853122   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
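At this point the kubelet unit, its 10-kubeadm.conf drop-in, the kube-vip manifest, and the control-plane.minikube.internal /etc/hosts entry are in place and kubelet has been started. A quick sketch for verifying each piece on the node:

  systemctl cat kubelet                              # unit file plus the 10-kubeadm.conf drop-in
  grep control-plane.minikube.internal /etc/hosts    # should map to 192.168.39.254
  systemctl is-active kubelet && sudo journalctl -u kubelet -n 20 --no-pager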
	I1026 01:00:53.869765   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:00:53.870266   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:00:53.870326   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:00:53.886042   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40443
	I1026 01:00:53.886641   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:00:53.887219   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:00:53.887243   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:00:53.887613   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:00:53.887814   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:00:53.887974   27934 start.go:317] joinCluster: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:00:53.888094   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1026 01:00:53.888116   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:00:53.891569   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:53.892007   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:00:53.892034   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:00:53.892213   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:00:53.892359   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:00:53.892504   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:00:53.892700   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:00:54.059992   27934 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:00:54.060032   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l7xlpj.5mal73j6josvpzmx --discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m02 --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443"
	I1026 01:01:15.752497   27934 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l7xlpj.5mal73j6josvpzmx --discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m02 --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443": (21.692442996s)
	I1026 01:01:15.752534   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
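The join above was produced in two steps: "kubeadm token create --print-join-command --ttl=0" on the existing control plane (the ssh_runner line at 01:00:53), and then the printed command was extended with the control-plane flags for the new member. A hand-run equivalent, with the token and CA hash left as placeholders:

  # on an existing control-plane node
  sudo kubeadm token create --print-join-command --ttl=0
  # on ha-300623-m02, using the printed token/hash plus the flags seen above
  sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443 \
      --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m02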
	I1026 01:01:16.303360   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-300623-m02 minikube.k8s.io/updated_at=2024_10_26T01_01_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=ha-300623 minikube.k8s.io/primary=false
	I1026 01:01:16.453258   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-300623-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1026 01:01:16.592863   27934 start.go:319] duration metric: took 22.704885851s to joinCluster
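After the join, the node receives minikube's metadata labels and the control-plane NoSchedule taint is removed (the trailing "-" in the taint command above). Both can be confirmed with kubectl, assuming the kubeconfig context matches the profile name as elsewhere in this report:

  kubectl --context ha-300623 get node ha-300623-m02 --show-labels
  kubectl --context ha-300623 describe node ha-300623-m02 | grep -i taints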
	I1026 01:01:16.592954   27934 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:01:16.593288   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:01:16.594650   27934 out.go:177] * Verifying Kubernetes components...
	I1026 01:01:16.596091   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:01:16.850259   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:01:16.885786   27934 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:01:16.886030   27934 kapi.go:59] client config for ha-300623: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt", KeyFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key", CAFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 01:01:16.886096   27934 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.183:8443
	I1026 01:01:16.886309   27934 node_ready.go:35] waiting up to 6m0s for node "ha-300623-m02" to be "Ready" ...
	I1026 01:01:16.886394   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:16.886406   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:16.886416   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:16.886421   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:16.901951   27934 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1026 01:01:17.386830   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:17.386852   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:17.386859   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:17.386867   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:17.391117   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:17.886726   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:17.886752   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:17.886769   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:17.886774   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:17.891812   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:01:18.386816   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:18.386836   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:18.386844   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:18.386849   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:18.389277   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:18.887322   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:18.887345   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:18.887354   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:18.887359   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:18.890950   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:18.891497   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:19.386717   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:19.386741   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:19.386752   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:19.386757   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:19.389841   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:19.886538   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:19.886562   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:19.886569   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:19.886573   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:19.889883   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:20.386728   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:20.386753   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:20.386764   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:20.386770   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:20.392483   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:01:20.887438   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:20.887464   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:20.887474   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:20.887480   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:20.891169   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:20.891590   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:21.386734   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:21.386758   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:21.386770   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:21.386778   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:21.389970   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:21.886824   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:21.886849   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:21.886859   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:21.886865   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:21.891560   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:22.386652   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:22.386674   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:22.386682   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:22.386686   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:22.391520   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:22.887482   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:22.887508   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:22.887524   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:22.887529   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:22.891155   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:22.891643   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:23.387538   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:23.387567   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:23.387578   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:23.387585   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:23.390499   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:23.886601   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:23.886627   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:23.886637   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:23.886647   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:23.890054   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:24.387524   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:24.387553   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:24.387564   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:24.387570   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:24.390618   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:24.886521   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:24.886550   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:24.886561   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:24.886567   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:24.889985   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:25.386794   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:25.386822   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:25.386831   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:25.386838   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:25.390108   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:25.390691   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:25.887094   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:25.887116   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:25.887124   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:25.887128   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:25.890067   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:26.387517   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:26.387537   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:26.387545   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:26.387550   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:26.391065   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:26.886664   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:26.886688   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:26.886698   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:26.886703   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:26.889958   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:27.386821   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:27.386850   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:27.386860   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:27.386865   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:27.389901   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:27.886863   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:27.886892   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:27.886901   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:27.886904   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:27.890223   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:27.890712   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:28.387256   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:28.387286   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:28.387297   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:28.387304   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:28.391313   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:28.887398   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:28.887423   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:28.887431   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:28.887435   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:28.891415   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:29.387299   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:29.387320   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:29.387328   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:29.387333   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:29.394125   27934 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1026 01:01:29.886896   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:29.886918   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:29.886926   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:29.886928   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:29.890460   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:29.891101   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:30.386473   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:30.386494   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:30.386505   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:30.386512   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:30.389574   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:30.886604   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:30.886631   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:30.886640   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:30.886644   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:30.890190   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:31.386924   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:31.386949   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:31.386959   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:31.386966   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:31.399297   27934 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1026 01:01:31.887213   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:31.887236   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:31.887243   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:31.887250   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:31.890605   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:31.891200   27934 node_ready.go:53] node "ha-300623-m02" has status "Ready":"False"
	I1026 01:01:32.386487   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:32.386513   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:32.386523   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:32.386530   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:32.389962   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:32.886975   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:32.887003   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:32.887016   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:32.887021   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:32.890088   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.386916   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:33.386938   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.386946   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.386950   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.390776   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.886708   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:33.886731   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.886742   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.886747   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.890420   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.890962   27934 node_ready.go:49] node "ha-300623-m02" has status "Ready":"True"
	I1026 01:01:33.890985   27934 node_ready.go:38] duration metric: took 17.004659759s for node "ha-300623-m02" to be "Ready" ...
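The repeated GET /api/v1/nodes/ha-300623-m02 requests above are minikube polling roughly twice per second until the node's Ready condition turns True, which took about 17s here. The kubectl equivalent of the same wait, again assuming the context name matches the profile:

  kubectl --context ha-300623 wait --for=condition=Ready node/ha-300623-m02 --timeout=6m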
	I1026 01:01:33.890996   27934 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:01:33.891090   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:33.891103   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.891113   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.891118   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.895593   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:33.901510   27934 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.901584   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ntmgc
	I1026 01:01:33.901593   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.901599   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.901603   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.904838   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.905632   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:33.905646   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.905653   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.905662   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.908670   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.909108   27934 pod_ready.go:93] pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:33.909125   27934 pod_ready.go:82] duration metric: took 7.593244ms for pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.909134   27934 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.909228   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qx24f
	I1026 01:01:33.909236   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.909243   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.909246   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.911623   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.912324   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:33.912342   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.912351   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.912356   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.914836   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.915526   27934 pod_ready.go:93] pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:33.915582   27934 pod_ready.go:82] duration metric: took 6.44095ms for pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.915619   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.915708   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623
	I1026 01:01:33.915720   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.915730   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.915737   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.918774   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:33.919308   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:33.919323   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.919332   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.919337   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.921541   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.921916   27934 pod_ready.go:93] pod "etcd-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:33.921932   27934 pod_ready.go:82] duration metric: took 6.293574ms for pod "etcd-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.921944   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.921993   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623-m02
	I1026 01:01:33.922003   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.922013   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.922020   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.924042   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:33.924574   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:33.924592   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:33.924620   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:33.924630   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:33.926627   27934 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1026 01:01:33.927009   27934 pod_ready.go:93] pod "etcd-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:33.927026   27934 pod_ready.go:82] duration metric: took 5.07473ms for pod "etcd-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:33.927043   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.087429   27934 request.go:632] Waited for 160.309698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623
	I1026 01:01:34.087488   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623
	I1026 01:01:34.087496   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.087507   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.087517   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.093218   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:01:34.287260   27934 request.go:632] Waited for 193.380175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:34.287335   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:34.287346   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.287356   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.287367   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.290680   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:34.291257   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:34.291280   27934 pod_ready.go:82] duration metric: took 364.229033ms for pod "kube-apiserver-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.291293   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.487643   27934 request.go:632] Waited for 196.274187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m02
	I1026 01:01:34.487743   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m02
	I1026 01:01:34.487757   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.487769   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.487776   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.490314   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:34.687266   27934 request.go:632] Waited for 196.34951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:34.687319   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:34.687325   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.687332   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.687336   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.690681   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:34.691098   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:34.691116   27934 pod_ready.go:82] duration metric: took 399.816191ms for pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.691125   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:34.887235   27934 request.go:632] Waited for 196.048043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623
	I1026 01:01:34.887286   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623
	I1026 01:01:34.887292   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:34.887299   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:34.887304   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:34.890298   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:35.087251   27934 request.go:632] Waited for 196.393455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:35.087304   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:35.087311   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.087320   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.087327   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.096042   27934 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1026 01:01:35.096481   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:35.096497   27934 pod_ready.go:82] duration metric: took 405.365113ms for pod "kube-controller-manager-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.096507   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.287575   27934 request.go:632] Waited for 190.95439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m02
	I1026 01:01:35.287635   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m02
	I1026 01:01:35.287641   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.287656   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.287664   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.290956   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:35.486850   27934 request.go:632] Waited for 195.295178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:35.486901   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:35.486907   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.486914   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.486918   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.489791   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:35.490490   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:35.490509   27934 pod_ready.go:82] duration metric: took 393.992807ms for pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.490519   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-65rns" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.687677   27934 request.go:632] Waited for 197.085878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65rns
	I1026 01:01:35.687734   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65rns
	I1026 01:01:35.687739   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.687747   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.687751   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.690861   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:35.886824   27934 request.go:632] Waited for 195.303807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:35.886902   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:35.886908   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:35.886915   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:35.886919   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:35.890003   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:35.890588   27934 pod_ready.go:93] pod "kube-proxy-65rns" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:35.890610   27934 pod_ready.go:82] duration metric: took 400.083533ms for pod "kube-proxy-65rns" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:35.890620   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7hn2d" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.087724   27934 request.go:632] Waited for 197.035019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hn2d
	I1026 01:01:36.087799   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hn2d
	I1026 01:01:36.087807   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.087817   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.087823   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.090987   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:36.287060   27934 request.go:632] Waited for 195.34906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:36.287112   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:36.287118   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.287126   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.287130   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.290355   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:36.290978   27934 pod_ready.go:93] pod "kube-proxy-7hn2d" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:36.291000   27934 pod_ready.go:82] duration metric: took 400.372479ms for pod "kube-proxy-7hn2d" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.291014   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.486971   27934 request.go:632] Waited for 195.883358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623
	I1026 01:01:36.487050   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623
	I1026 01:01:36.487059   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.487068   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.487073   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.491124   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:36.686937   27934 request.go:632] Waited for 195.292838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:36.686992   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:01:36.686998   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.687005   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.687009   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.689912   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:01:36.690462   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:36.690479   27934 pod_ready.go:82] duration metric: took 399.458178ms for pod "kube-scheduler-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.690490   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:36.887645   27934 request.go:632] Waited for 197.093805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m02
	I1026 01:01:36.887721   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m02
	I1026 01:01:36.887731   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:36.887742   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:36.887752   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:36.892972   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:01:37.086834   27934 request.go:632] Waited for 193.310036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:37.086917   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:01:37.086924   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.086935   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.086940   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.091462   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:37.091914   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:01:37.091933   27934 pod_ready.go:82] duration metric: took 401.437262ms for pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:01:37.091944   27934 pod_ready.go:39] duration metric: took 3.20092896s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
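
The pod_ready waits above poll each system pod until its PodReady condition reports True, which is what the "Ready":"True" lines record. As a rough illustration only (a minimal client-go sketch, not minikube's own pod_ready.go; the kubeconfig path and pod name are placeholders), the same check can be expressed as:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(context.Background(), kubernetes.NewForConfigOrDie(cfg), "kube-system", "kube-proxy-65rns")
	fmt.Println(ready, err)
}
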
	I1026 01:01:37.091963   27934 api_server.go:52] waiting for apiserver process to appear ...
	I1026 01:01:37.092013   27934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:01:37.107184   27934 api_server.go:72] duration metric: took 20.514182215s to wait for apiserver process to appear ...
	I1026 01:01:37.107232   27934 api_server.go:88] waiting for apiserver healthz status ...
	I1026 01:01:37.107251   27934 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1026 01:01:37.112416   27934 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1026 01:01:37.112504   27934 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I1026 01:01:37.112517   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.112528   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.112539   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.113540   27934 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1026 01:01:37.113668   27934 api_server.go:141] control plane version: v1.31.2
	I1026 01:01:37.113698   27934 api_server.go:131] duration metric: took 6.458284ms to wait for apiserver health ...
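
The healthz step above is an ordinary HTTPS GET that succeeds when the API server answers 200 with body "ok", followed by a GET /version to read the control-plane version. A self-contained sketch of that probe (TLS verification is skipped here only to keep the example short; minikube's real client authenticates with the cluster CA and certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy probes <base>/healthz and treats "200 ok" as healthy.
func apiserverHealthy(base string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.183:8443")
	fmt.Println(ok, err)
}
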
	I1026 01:01:37.113710   27934 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 01:01:37.287117   27934 request.go:632] Waited for 173.325695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:37.287206   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:37.287218   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.287229   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.287237   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.291660   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:01:37.296191   27934 system_pods.go:59] 17 kube-system pods found
	I1026 01:01:37.296219   27934 system_pods.go:61] "coredns-7c65d6cfc9-ntmgc" [b2e07a8a-ed53-4151-9cdd-6345d84fea7d] Running
	I1026 01:01:37.296224   27934 system_pods.go:61] "coredns-7c65d6cfc9-qx24f" [d7fc0eb5-4828-436f-a5c8-8de607f590cf] Running
	I1026 01:01:37.296228   27934 system_pods.go:61] "etcd-ha-300623" [7af25c40-90db-43fb-9d1c-02d3b6092d30] Running
	I1026 01:01:37.296232   27934 system_pods.go:61] "etcd-ha-300623-m02" [5e6978a1-41aa-46dd-a1cd-e02042d4ce04] Running
	I1026 01:01:37.296235   27934 system_pods.go:61] "kindnet-4cqmf" [c887471a-629c-4bf1-9296-8ccb5ba56cd6] Running
	I1026 01:01:37.296238   27934 system_pods.go:61] "kindnet-g5bkb" [0ad4551d-8c28-45b3-9563-03d427208f4f] Running
	I1026 01:01:37.296241   27934 system_pods.go:61] "kube-apiserver-ha-300623" [23f40207-db77-4a02-a2dc-eecea5b1874a] Running
	I1026 01:01:37.296244   27934 system_pods.go:61] "kube-apiserver-ha-300623-m02" [6e2d1aeb-ad12-4328-b4da-6b3a2fd19df0] Running
	I1026 01:01:37.296248   27934 system_pods.go:61] "kube-controller-manager-ha-300623" [b9c979d4-64e6-473c-b688-295ddd98c379] Running
	I1026 01:01:37.296251   27934 system_pods.go:61] "kube-controller-manager-ha-300623-m02" [4ae0dbcd-d50c-4a53-9347-bed0a06f1f15] Running
	I1026 01:01:37.296254   27934 system_pods.go:61] "kube-proxy-65rns" [895d0bd9-0f38-442f-99a2-6c5c70bddd39] Running
	I1026 01:01:37.296257   27934 system_pods.go:61] "kube-proxy-7hn2d" [8ffc007b-7e17-4810-9f44-f190a8a7d21b] Running
	I1026 01:01:37.296260   27934 system_pods.go:61] "kube-scheduler-ha-300623" [fcbddffd-40d8-4ebd-bf1e-58b1457af487] Running
	I1026 01:01:37.296263   27934 system_pods.go:61] "kube-scheduler-ha-300623-m02" [81664577-53a3-46fd-98f0-5a517d60fc40] Running
	I1026 01:01:37.296266   27934 system_pods.go:61] "kube-vip-ha-300623" [23c24ab4-cff5-48fa-841b-9567360cbb00] Running
	I1026 01:01:37.296269   27934 system_pods.go:61] "kube-vip-ha-300623-m02" [5e054e06-be47-4fca-bf3d-d0919d31fe23] Running
	I1026 01:01:37.296272   27934 system_pods.go:61] "storage-provisioner" [28d286b1-45b3-4775-a8ff-47dc3cb84792] Running
	I1026 01:01:37.296277   27934 system_pods.go:74] duration metric: took 182.559653ms to wait for pod list to return data ...
	I1026 01:01:37.296287   27934 default_sa.go:34] waiting for default service account to be created ...
	I1026 01:01:37.487718   27934 request.go:632] Waited for 191.356548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:01:37.487771   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:01:37.487776   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.487783   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.487787   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.491586   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:37.491857   27934 default_sa.go:45] found service account: "default"
	I1026 01:01:37.491878   27934 default_sa.go:55] duration metric: took 195.585476ms for default service account to be created ...
	I1026 01:01:37.491887   27934 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 01:01:37.687316   27934 request.go:632] Waited for 195.344627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:37.687371   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:01:37.687376   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.687383   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.687387   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.691369   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:37.696949   27934 system_pods.go:86] 17 kube-system pods found
	I1026 01:01:37.696973   27934 system_pods.go:89] "coredns-7c65d6cfc9-ntmgc" [b2e07a8a-ed53-4151-9cdd-6345d84fea7d] Running
	I1026 01:01:37.696979   27934 system_pods.go:89] "coredns-7c65d6cfc9-qx24f" [d7fc0eb5-4828-436f-a5c8-8de607f590cf] Running
	I1026 01:01:37.696983   27934 system_pods.go:89] "etcd-ha-300623" [7af25c40-90db-43fb-9d1c-02d3b6092d30] Running
	I1026 01:01:37.696988   27934 system_pods.go:89] "etcd-ha-300623-m02" [5e6978a1-41aa-46dd-a1cd-e02042d4ce04] Running
	I1026 01:01:37.696991   27934 system_pods.go:89] "kindnet-4cqmf" [c887471a-629c-4bf1-9296-8ccb5ba56cd6] Running
	I1026 01:01:37.696995   27934 system_pods.go:89] "kindnet-g5bkb" [0ad4551d-8c28-45b3-9563-03d427208f4f] Running
	I1026 01:01:37.696999   27934 system_pods.go:89] "kube-apiserver-ha-300623" [23f40207-db77-4a02-a2dc-eecea5b1874a] Running
	I1026 01:01:37.697003   27934 system_pods.go:89] "kube-apiserver-ha-300623-m02" [6e2d1aeb-ad12-4328-b4da-6b3a2fd19df0] Running
	I1026 01:01:37.697006   27934 system_pods.go:89] "kube-controller-manager-ha-300623" [b9c979d4-64e6-473c-b688-295ddd98c379] Running
	I1026 01:01:37.697010   27934 system_pods.go:89] "kube-controller-manager-ha-300623-m02" [4ae0dbcd-d50c-4a53-9347-bed0a06f1f15] Running
	I1026 01:01:37.697014   27934 system_pods.go:89] "kube-proxy-65rns" [895d0bd9-0f38-442f-99a2-6c5c70bddd39] Running
	I1026 01:01:37.697018   27934 system_pods.go:89] "kube-proxy-7hn2d" [8ffc007b-7e17-4810-9f44-f190a8a7d21b] Running
	I1026 01:01:37.697021   27934 system_pods.go:89] "kube-scheduler-ha-300623" [fcbddffd-40d8-4ebd-bf1e-58b1457af487] Running
	I1026 01:01:37.697028   27934 system_pods.go:89] "kube-scheduler-ha-300623-m02" [81664577-53a3-46fd-98f0-5a517d60fc40] Running
	I1026 01:01:37.697031   27934 system_pods.go:89] "kube-vip-ha-300623" [23c24ab4-cff5-48fa-841b-9567360cbb00] Running
	I1026 01:01:37.697034   27934 system_pods.go:89] "kube-vip-ha-300623-m02" [5e054e06-be47-4fca-bf3d-d0919d31fe23] Running
	I1026 01:01:37.697036   27934 system_pods.go:89] "storage-provisioner" [28d286b1-45b3-4775-a8ff-47dc3cb84792] Running
	I1026 01:01:37.697042   27934 system_pods.go:126] duration metric: took 205.150542ms to wait for k8s-apps to be running ...
	I1026 01:01:37.697052   27934 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 01:01:37.697091   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:01:37.712370   27934 system_svc.go:56] duration metric: took 15.306195ms WaitForService to wait for kubelet
	I1026 01:01:37.712402   27934 kubeadm.go:582] duration metric: took 21.119406025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
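
The kubelet check a few lines up is nothing more than systemd's exit status: "sudo systemctl is-active --quiet service kubelet" returning 0 means the unit is active. The log runs it over SSH via ssh_runner; the equivalent local call would be (sketch, assumes passwordless sudo on the node):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 from "systemctl is-active --quiet" means the unit is active.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
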
	I1026 01:01:37.712420   27934 node_conditions.go:102] verifying NodePressure condition ...
	I1026 01:01:37.886735   27934 request.go:632] Waited for 174.248578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I1026 01:01:37.886856   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I1026 01:01:37.886868   27934 round_trippers.go:469] Request Headers:
	I1026 01:01:37.886878   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:01:37.886887   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:01:37.890795   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:01:37.891473   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:01:37.891497   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:01:37.891509   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:01:37.891513   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:01:37.891517   27934 node_conditions.go:105] duration metric: took 179.092926ms to run NodePressure ...
	I1026 01:01:37.891528   27934 start.go:241] waiting for startup goroutines ...
	I1026 01:01:37.891553   27934 start.go:255] writing updated cluster config ...
	I1026 01:01:37.893974   27934 out.go:201] 
	I1026 01:01:37.895579   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:01:37.895693   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:01:37.897785   27934 out.go:177] * Starting "ha-300623-m03" control-plane node in "ha-300623" cluster
	I1026 01:01:37.898981   27934 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:01:37.899006   27934 cache.go:56] Caching tarball of preloaded images
	I1026 01:01:37.899114   27934 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:01:37.899125   27934 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 01:01:37.899210   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:01:37.900601   27934 start.go:360] acquireMachinesLock for ha-300623-m03: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 01:01:37.900662   27934 start.go:364] duration metric: took 37.924µs to acquireMachinesLock for "ha-300623-m03"
	I1026 01:01:37.900681   27934 start.go:93] Provisioning new machine with config: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:01:37.900777   27934 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1026 01:01:37.902482   27934 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1026 01:01:37.902577   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:01:37.902616   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:01:37.917489   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I1026 01:01:37.918010   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:01:37.918524   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:01:37.918546   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:01:37.918854   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:01:37.919023   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetMachineName
	I1026 01:01:37.919164   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:01:37.919300   27934 start.go:159] libmachine.API.Create for "ha-300623" (driver="kvm2")
	I1026 01:01:37.919332   27934 client.go:168] LocalClient.Create starting
	I1026 01:01:37.919365   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 01:01:37.919401   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 01:01:37.919415   27934 main.go:141] libmachine: Parsing certificate...
	I1026 01:01:37.919461   27934 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 01:01:37.919481   27934 main.go:141] libmachine: Decoding PEM data...
	I1026 01:01:37.919492   27934 main.go:141] libmachine: Parsing certificate...
	I1026 01:01:37.919511   27934 main.go:141] libmachine: Running pre-create checks...
	I1026 01:01:37.919519   27934 main.go:141] libmachine: (ha-300623-m03) Calling .PreCreateCheck
	I1026 01:01:37.919665   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetConfigRaw
	I1026 01:01:37.920059   27934 main.go:141] libmachine: Creating machine...
	I1026 01:01:37.920075   27934 main.go:141] libmachine: (ha-300623-m03) Calling .Create
	I1026 01:01:37.920211   27934 main.go:141] libmachine: (ha-300623-m03) Creating KVM machine...
	I1026 01:01:37.921465   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found existing default KVM network
	I1026 01:01:37.921611   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found existing private KVM network mk-ha-300623
	I1026 01:01:37.921761   27934 main.go:141] libmachine: (ha-300623-m03) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03 ...
	I1026 01:01:37.921786   27934 main.go:141] libmachine: (ha-300623-m03) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 01:01:37.921849   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:37.921742   28699 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:01:37.921948   27934 main.go:141] libmachine: (ha-300623-m03) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 01:01:38.168295   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:38.168154   28699 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa...
	I1026 01:01:38.291085   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:38.290967   28699 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/ha-300623-m03.rawdisk...
	I1026 01:01:38.291115   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Writing magic tar header
	I1026 01:01:38.291125   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Writing SSH key tar header
	I1026 01:01:38.291132   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:38.291098   28699 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03 ...
	I1026 01:01:38.291249   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03
	I1026 01:01:38.291280   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03 (perms=drwx------)
	I1026 01:01:38.291294   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 01:01:38.291307   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:01:38.291313   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 01:01:38.291323   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 01:01:38.291330   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home/jenkins
	I1026 01:01:38.291340   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Checking permissions on dir: /home
	I1026 01:01:38.291363   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 01:01:38.291374   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Skipping /home - not owner
	I1026 01:01:38.291387   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 01:01:38.291395   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 01:01:38.291403   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 01:01:38.291411   27934 main.go:141] libmachine: (ha-300623-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 01:01:38.291417   27934 main.go:141] libmachine: (ha-300623-m03) Creating domain...
	I1026 01:01:38.292244   27934 main.go:141] libmachine: (ha-300623-m03) define libvirt domain using xml: 
	I1026 01:01:38.292268   27934 main.go:141] libmachine: (ha-300623-m03) <domain type='kvm'>
	I1026 01:01:38.292276   27934 main.go:141] libmachine: (ha-300623-m03)   <name>ha-300623-m03</name>
	I1026 01:01:38.292281   27934 main.go:141] libmachine: (ha-300623-m03)   <memory unit='MiB'>2200</memory>
	I1026 01:01:38.292286   27934 main.go:141] libmachine: (ha-300623-m03)   <vcpu>2</vcpu>
	I1026 01:01:38.292290   27934 main.go:141] libmachine: (ha-300623-m03)   <features>
	I1026 01:01:38.292296   27934 main.go:141] libmachine: (ha-300623-m03)     <acpi/>
	I1026 01:01:38.292303   27934 main.go:141] libmachine: (ha-300623-m03)     <apic/>
	I1026 01:01:38.292314   27934 main.go:141] libmachine: (ha-300623-m03)     <pae/>
	I1026 01:01:38.292320   27934 main.go:141] libmachine: (ha-300623-m03)     
	I1026 01:01:38.292330   27934 main.go:141] libmachine: (ha-300623-m03)   </features>
	I1026 01:01:38.292336   27934 main.go:141] libmachine: (ha-300623-m03)   <cpu mode='host-passthrough'>
	I1026 01:01:38.292375   27934 main.go:141] libmachine: (ha-300623-m03)   
	I1026 01:01:38.292393   27934 main.go:141] libmachine: (ha-300623-m03)   </cpu>
	I1026 01:01:38.292406   27934 main.go:141] libmachine: (ha-300623-m03)   <os>
	I1026 01:01:38.292421   27934 main.go:141] libmachine: (ha-300623-m03)     <type>hvm</type>
	I1026 01:01:38.292439   27934 main.go:141] libmachine: (ha-300623-m03)     <boot dev='cdrom'/>
	I1026 01:01:38.292484   27934 main.go:141] libmachine: (ha-300623-m03)     <boot dev='hd'/>
	I1026 01:01:38.292496   27934 main.go:141] libmachine: (ha-300623-m03)     <bootmenu enable='no'/>
	I1026 01:01:38.292505   27934 main.go:141] libmachine: (ha-300623-m03)   </os>
	I1026 01:01:38.292533   27934 main.go:141] libmachine: (ha-300623-m03)   <devices>
	I1026 01:01:38.292552   27934 main.go:141] libmachine: (ha-300623-m03)     <disk type='file' device='cdrom'>
	I1026 01:01:38.292569   27934 main.go:141] libmachine: (ha-300623-m03)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/boot2docker.iso'/>
	I1026 01:01:38.292579   27934 main.go:141] libmachine: (ha-300623-m03)       <target dev='hdc' bus='scsi'/>
	I1026 01:01:38.292598   27934 main.go:141] libmachine: (ha-300623-m03)       <readonly/>
	I1026 01:01:38.292607   27934 main.go:141] libmachine: (ha-300623-m03)     </disk>
	I1026 01:01:38.292617   27934 main.go:141] libmachine: (ha-300623-m03)     <disk type='file' device='disk'>
	I1026 01:01:38.292641   27934 main.go:141] libmachine: (ha-300623-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 01:01:38.292657   27934 main.go:141] libmachine: (ha-300623-m03)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/ha-300623-m03.rawdisk'/>
	I1026 01:01:38.292667   27934 main.go:141] libmachine: (ha-300623-m03)       <target dev='hda' bus='virtio'/>
	I1026 01:01:38.292685   27934 main.go:141] libmachine: (ha-300623-m03)     </disk>
	I1026 01:01:38.292699   27934 main.go:141] libmachine: (ha-300623-m03)     <interface type='network'>
	I1026 01:01:38.292713   27934 main.go:141] libmachine: (ha-300623-m03)       <source network='mk-ha-300623'/>
	I1026 01:01:38.292722   27934 main.go:141] libmachine: (ha-300623-m03)       <model type='virtio'/>
	I1026 01:01:38.292731   27934 main.go:141] libmachine: (ha-300623-m03)     </interface>
	I1026 01:01:38.292740   27934 main.go:141] libmachine: (ha-300623-m03)     <interface type='network'>
	I1026 01:01:38.292749   27934 main.go:141] libmachine: (ha-300623-m03)       <source network='default'/>
	I1026 01:01:38.292759   27934 main.go:141] libmachine: (ha-300623-m03)       <model type='virtio'/>
	I1026 01:01:38.292790   27934 main.go:141] libmachine: (ha-300623-m03)     </interface>
	I1026 01:01:38.292812   27934 main.go:141] libmachine: (ha-300623-m03)     <serial type='pty'>
	I1026 01:01:38.292821   27934 main.go:141] libmachine: (ha-300623-m03)       <target port='0'/>
	I1026 01:01:38.292825   27934 main.go:141] libmachine: (ha-300623-m03)     </serial>
	I1026 01:01:38.292832   27934 main.go:141] libmachine: (ha-300623-m03)     <console type='pty'>
	I1026 01:01:38.292837   27934 main.go:141] libmachine: (ha-300623-m03)       <target type='serial' port='0'/>
	I1026 01:01:38.292843   27934 main.go:141] libmachine: (ha-300623-m03)     </console>
	I1026 01:01:38.292851   27934 main.go:141] libmachine: (ha-300623-m03)     <rng model='virtio'>
	I1026 01:01:38.292862   27934 main.go:141] libmachine: (ha-300623-m03)       <backend model='random'>/dev/random</backend>
	I1026 01:01:38.292871   27934 main.go:141] libmachine: (ha-300623-m03)     </rng>
	I1026 01:01:38.292879   27934 main.go:141] libmachine: (ha-300623-m03)     
	I1026 01:01:38.292887   27934 main.go:141] libmachine: (ha-300623-m03)     
	I1026 01:01:38.292907   27934 main.go:141] libmachine: (ha-300623-m03)   </devices>
	I1026 01:01:38.292927   27934 main.go:141] libmachine: (ha-300623-m03) </domain>
	I1026 01:01:38.292944   27934 main.go:141] libmachine: (ha-300623-m03) 
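
The block above is the complete libvirt domain XML that the kvm2 driver defines and then boots for ha-300623-m03 (2 vCPUs, 2200 MiB RAM, the boot2docker ISO as CD-ROM, the raw disk, and virtio NICs on the mk-ha-300623 and default networks). For debugging, the same two steps can be reproduced by hand with virsh against the qemu:///system URI the driver uses; a sketch, assuming the XML has been saved to ha-300623-m03.xml:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Define the domain from the XML shown in the log, then start it.
	for _, args := range [][]string{
		{"virsh", "-c", "qemu:///system", "define", "ha-300623-m03.xml"},
		{"virsh", "-c", "qemu:///system", "start", "ha-300623-m03"},
	} {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Printf("%v: %s (err=%v)\n", args, out, err)
	}
}
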
	I1026 01:01:38.300030   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:59:6f:46 in network default
	I1026 01:01:38.300611   27934 main.go:141] libmachine: (ha-300623-m03) Ensuring networks are active...
	I1026 01:01:38.300639   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:38.301325   27934 main.go:141] libmachine: (ha-300623-m03) Ensuring network default is active
	I1026 01:01:38.301614   27934 main.go:141] libmachine: (ha-300623-m03) Ensuring network mk-ha-300623 is active
	I1026 01:01:38.301965   27934 main.go:141] libmachine: (ha-300623-m03) Getting domain xml...
	I1026 01:01:38.302564   27934 main.go:141] libmachine: (ha-300623-m03) Creating domain...
	I1026 01:01:39.541523   27934 main.go:141] libmachine: (ha-300623-m03) Waiting to get IP...
	I1026 01:01:39.542453   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:39.542916   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:39.542942   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:39.542887   28699 retry.go:31] will retry after 281.419322ms: waiting for machine to come up
	I1026 01:01:39.826321   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:39.826750   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:39.826778   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:39.826737   28699 retry.go:31] will retry after 326.383367ms: waiting for machine to come up
	I1026 01:01:40.155076   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:40.155490   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:40.155515   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:40.155448   28699 retry.go:31] will retry after 321.43703ms: waiting for machine to come up
	I1026 01:01:40.479066   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:40.479512   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:40.479541   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:40.479464   28699 retry.go:31] will retry after 585.906236ms: waiting for machine to come up
	I1026 01:01:41.068220   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:41.068712   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:41.068740   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:41.068671   28699 retry.go:31] will retry after 528.538636ms: waiting for machine to come up
	I1026 01:01:41.598430   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:41.599018   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:41.599040   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:41.598979   28699 retry.go:31] will retry after 646.897359ms: waiting for machine to come up
	I1026 01:01:42.247537   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:42.247952   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:42.247977   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:42.247889   28699 retry.go:31] will retry after 982.424553ms: waiting for machine to come up
	I1026 01:01:43.231997   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:43.232498   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:43.232526   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:43.232426   28699 retry.go:31] will retry after 920.160573ms: waiting for machine to come up
	I1026 01:01:44.154517   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:44.155015   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:44.155041   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:44.154974   28699 retry.go:31] will retry after 1.233732499s: waiting for machine to come up
	I1026 01:01:45.390175   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:45.390649   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:45.390676   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:45.390595   28699 retry.go:31] will retry after 2.305424014s: waiting for machine to come up
	I1026 01:01:47.698485   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:47.698913   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:47.698936   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:47.698861   28699 retry.go:31] will retry after 2.109217289s: waiting for machine to come up
	I1026 01:01:49.810556   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:49.811065   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:49.811095   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:49.811021   28699 retry.go:31] will retry after 3.235213993s: waiting for machine to come up
	I1026 01:01:53.047405   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:53.047859   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:53.047896   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:53.047798   28699 retry.go:31] will retry after 2.928776248s: waiting for machine to come up
	I1026 01:01:55.979004   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:01:55.979474   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find current IP address of domain ha-300623-m03 in network mk-ha-300623
	I1026 01:01:55.979500   27934 main.go:141] libmachine: (ha-300623-m03) DBG | I1026 01:01:55.979422   28699 retry.go:31] will retry after 4.662153221s: waiting for machine to come up
	I1026 01:02:00.643538   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.644004   27934 main.go:141] libmachine: (ha-300623-m03) Found IP for machine: 192.168.39.180
	I1026 01:02:00.644032   27934 main.go:141] libmachine: (ha-300623-m03) Reserving static IP address...
	I1026 01:02:00.644046   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has current primary IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.644407   27934 main.go:141] libmachine: (ha-300623-m03) DBG | unable to find host DHCP lease matching {name: "ha-300623-m03", mac: "52:54:00:c1:38:db", ip: "192.168.39.180"} in network mk-ha-300623
	I1026 01:02:00.720512   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Getting to WaitForSSH function...
	I1026 01:02:00.720543   27934 main.go:141] libmachine: (ha-300623-m03) Reserved static IP address: 192.168.39.180
	I1026 01:02:00.720555   27934 main.go:141] libmachine: (ha-300623-m03) Waiting for SSH to be available...
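
The retry.go lines above implement a simple grow-the-delay poll: look for a DHCP lease matching the VM's MAC address and, while none exists, sleep for an increasing interval (281ms up to a few seconds) until the machine reports an IP. A self-contained sketch of that pattern, with lookup standing in for the lease query (it is not minikube's actual helper):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup with a growing delay until it yields an address
// or the timeout expires, roughly mirroring the retry.go backoff above.
func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, bool) {
		// Stand-in for "read the DHCP lease for MAC 52:54:00:c1:38:db".
		if time.Since(start) > 2*time.Second {
			return "192.168.39.180", true
		}
		return "", false
	}, 30*time.Second)
	fmt.Println(ip, err)
}
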
	I1026 01:02:00.723096   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.723544   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:00.723574   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.723782   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Using SSH client type: external
	I1026 01:02:00.723802   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa (-rw-------)
	I1026 01:02:00.723832   27934 main.go:141] libmachine: (ha-300623-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 01:02:00.723848   27934 main.go:141] libmachine: (ha-300623-m03) DBG | About to run SSH command:
	I1026 01:02:00.723870   27934 main.go:141] libmachine: (ha-300623-m03) DBG | exit 0
	I1026 01:02:00.849883   27934 main.go:141] libmachine: (ha-300623-m03) DBG | SSH cmd err, output: <nil>: 
	I1026 01:02:00.850375   27934 main.go:141] libmachine: (ha-300623-m03) KVM machine creation complete!
	I1026 01:02:00.850699   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetConfigRaw
	I1026 01:02:00.851242   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:00.851412   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:00.851548   27934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 01:02:00.851566   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetState
	I1026 01:02:00.852882   27934 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 01:02:00.852898   27934 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 01:02:00.852910   27934 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 01:02:00.852920   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:00.855365   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.855806   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:00.855828   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.856011   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:00.856209   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:00.856384   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:00.856518   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:00.856737   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:00.856963   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:00.856977   27934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 01:02:00.960586   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:02:00.960610   27934 main.go:141] libmachine: Detecting the provisioner...
	I1026 01:02:00.960620   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:00.963489   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.963835   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:00.963855   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:00.964027   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:00.964212   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:00.964377   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:00.964520   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:00.964689   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:00.964839   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:00.964850   27934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 01:02:01.070154   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 01:02:01.070243   27934 main.go:141] libmachine: found compatible host: buildroot
	I1026 01:02:01.070253   27934 main.go:141] libmachine: Provisioning with buildroot...
	I1026 01:02:01.070260   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetMachineName
	I1026 01:02:01.070494   27934 buildroot.go:166] provisioning hostname "ha-300623-m03"
	I1026 01:02:01.070509   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetMachineName
	I1026 01:02:01.070670   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.073236   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.073643   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.073674   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.073803   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.074025   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.074141   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.074309   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.074462   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:01.074668   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:01.074685   27934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-300623-m03 && echo "ha-300623-m03" | sudo tee /etc/hostname
	I1026 01:02:01.191755   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-300623-m03
	
	I1026 01:02:01.191785   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.194565   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.194928   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.194957   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.195106   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.195276   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.195444   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.195582   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.195873   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:01.196084   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:01.196105   27934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-300623-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-300623-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-300623-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:02:01.305994   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:02:01.306027   27934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:02:01.306044   27934 buildroot.go:174] setting up certificates
	I1026 01:02:01.306053   27934 provision.go:84] configureAuth start
	I1026 01:02:01.306066   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetMachineName
	I1026 01:02:01.306391   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetIP
	I1026 01:02:01.308943   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.309271   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.309299   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.309440   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.311607   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.311976   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.312003   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.312212   27934 provision.go:143] copyHostCerts
	I1026 01:02:01.312245   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:02:01.312277   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:02:01.312286   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:02:01.312350   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:02:01.312423   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:02:01.312441   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:02:01.312445   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:02:01.312471   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:02:01.312516   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:02:01.312533   27934 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:02:01.312540   27934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:02:01.312560   27934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:02:01.312651   27934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.ha-300623-m03 san=[127.0.0.1 192.168.39.180 ha-300623-m03 localhost minikube]
	I1026 01:02:01.465531   27934 provision.go:177] copyRemoteCerts
	I1026 01:02:01.465583   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:02:01.465608   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.468185   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.468506   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.468531   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.468753   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.468983   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.469158   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.469293   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:02:01.551550   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:02:01.551614   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:02:01.576554   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:02:01.576635   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 01:02:01.602350   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:02:01.602435   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 01:02:01.626219   27934 provision.go:87] duration metric: took 320.153705ms to configureAuth
	I1026 01:02:01.626250   27934 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:02:01.626469   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:02:01.626540   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.629202   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.629541   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.629569   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.629826   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.630038   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.630193   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.630349   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.630520   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:01.630681   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:01.630695   27934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:02:01.850626   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:02:01.850656   27934 main.go:141] libmachine: Checking connection to Docker...
	I1026 01:02:01.850666   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetURL
	I1026 01:02:01.851985   27934 main.go:141] libmachine: (ha-300623-m03) DBG | Using libvirt version 6000000
	I1026 01:02:01.853953   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.854248   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.854277   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.854395   27934 main.go:141] libmachine: Docker is up and running!
	I1026 01:02:01.854410   27934 main.go:141] libmachine: Reticulating splines...
	I1026 01:02:01.854416   27934 client.go:171] duration metric: took 23.935075321s to LocalClient.Create
	I1026 01:02:01.854435   27934 start.go:167] duration metric: took 23.935138215s to libmachine.API.Create "ha-300623"
	I1026 01:02:01.854442   27934 start.go:293] postStartSetup for "ha-300623-m03" (driver="kvm2")
	I1026 01:02:01.854455   27934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:02:01.854473   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:01.854694   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:02:01.854714   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.856743   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.857033   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.857061   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.857181   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.857358   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.857509   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.857636   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:02:01.939727   27934 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:02:01.943512   27934 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:02:01.943536   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:02:01.943602   27934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:02:01.943673   27934 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:02:01.943683   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /etc/ssl/certs/176152.pem
	I1026 01:02:01.943769   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:02:01.952556   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:02:01.974588   27934 start.go:296] duration metric: took 120.131633ms for postStartSetup
	I1026 01:02:01.974635   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetConfigRaw
	I1026 01:02:01.975249   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetIP
	I1026 01:02:01.977630   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.977939   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.977966   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.978201   27934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:02:01.978439   27934 start.go:128] duration metric: took 24.077650452s to createHost
	I1026 01:02:01.978471   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:01.981153   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.981663   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:01.981690   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:01.981836   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:01.981994   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.982159   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:01.982318   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:01.982480   27934 main.go:141] libmachine: Using SSH client type: native
	I1026 01:02:01.982694   27934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1026 01:02:01.982711   27934 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:02:02.085984   27934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729904522.063699456
	
	I1026 01:02:02.086012   27934 fix.go:216] guest clock: 1729904522.063699456
	I1026 01:02:02.086022   27934 fix.go:229] Guest: 2024-10-26 01:02:02.063699456 +0000 UTC Remote: 2024-10-26 01:02:01.978456379 +0000 UTC m=+140.913817945 (delta=85.243077ms)
	I1026 01:02:02.086043   27934 fix.go:200] guest clock delta is within tolerance: 85.243077ms
	I1026 01:02:02.086049   27934 start.go:83] releasing machines lock for "ha-300623-m03", held for 24.185376811s
	I1026 01:02:02.086067   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:02.086287   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetIP
	I1026 01:02:02.088913   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.089268   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:02.089295   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.091504   27934 out.go:177] * Found network options:
	I1026 01:02:02.092955   27934 out.go:177]   - NO_PROXY=192.168.39.183,192.168.39.62
	W1026 01:02:02.094206   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	W1026 01:02:02.094236   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:02:02.094251   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:02.094803   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:02.094989   27934 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:02:02.095095   27934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:02:02.095133   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	W1026 01:02:02.095154   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	W1026 01:02:02.095180   27934 proxy.go:119] fail to check proxy env: Error ip not in block
	I1026 01:02:02.095247   27934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:02:02.095268   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:02:02.097751   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.098028   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.098086   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:02.098111   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.098235   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:02.098391   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:02.098497   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:02.098514   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:02.098524   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:02.098666   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:02:02.098717   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:02:02.098843   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:02:02.098984   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:02:02.099112   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:02:02.334862   27934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:02:02.340486   27934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:02:02.340547   27934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:02:02.357805   27934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 01:02:02.357834   27934 start.go:495] detecting cgroup driver to use...
	I1026 01:02:02.357898   27934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:02:02.374996   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:02:02.392000   27934 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:02:02.392086   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:02:02.407807   27934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:02:02.423965   27934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:02:02.552274   27934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:02:02.700711   27934 docker.go:233] disabling docker service ...
	I1026 01:02:02.700771   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:02:02.718236   27934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:02:02.732116   27934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:02:02.868905   27934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:02:02.980683   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:02:02.994225   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:02:03.012791   27934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:02:03.012857   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.023082   27934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:02:03.023153   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.033232   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.045462   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.056259   27934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:02:03.067151   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.077520   27934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.096669   27934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:02:03.106891   27934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:02:03.116392   27934 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 01:02:03.116458   27934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 01:02:03.129779   27934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:02:03.139745   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:02:03.248476   27934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 01:02:03.335933   27934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:02:03.336001   27934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:02:03.341028   27934 start.go:563] Will wait 60s for crictl version
	I1026 01:02:03.341087   27934 ssh_runner.go:195] Run: which crictl
	I1026 01:02:03.344865   27934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:02:03.384107   27934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:02:03.384182   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:02:03.413095   27934 ssh_runner.go:195] Run: crio --version
	I1026 01:02:03.443714   27934 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:02:03.445737   27934 out.go:177]   - env NO_PROXY=192.168.39.183
	I1026 01:02:03.447586   27934 out.go:177]   - env NO_PROXY=192.168.39.183,192.168.39.62
	I1026 01:02:03.449031   27934 main.go:141] libmachine: (ha-300623-m03) Calling .GetIP
	I1026 01:02:03.452447   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:03.452878   27934 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:02:03.452917   27934 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:02:03.453179   27934 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:02:03.457652   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:02:03.471067   27934 mustload.go:65] Loading cluster: ha-300623
	I1026 01:02:03.471351   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:02:03.471669   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:02:03.471714   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:02:03.487194   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
	I1026 01:02:03.487657   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:02:03.488105   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:02:03.488127   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:02:03.488437   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:02:03.488638   27934 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:02:03.490095   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:02:03.490500   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:02:03.490536   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:02:03.506020   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I1026 01:02:03.506418   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:02:03.506947   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:02:03.506976   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:02:03.507350   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:02:03.507527   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:02:03.507727   27934 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623 for IP: 192.168.39.180
	I1026 01:02:03.507740   27934 certs.go:194] generating shared ca certs ...
	I1026 01:02:03.507758   27934 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:02:03.507883   27934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:02:03.507924   27934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:02:03.507933   27934 certs.go:256] generating profile certs ...
	I1026 01:02:03.508003   27934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key
	I1026 01:02:03.508028   27934 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.71a5adc0
	I1026 01:02:03.508039   27934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.71a5adc0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.62 192.168.39.180 192.168.39.254]
	I1026 01:02:03.728822   27934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.71a5adc0 ...
	I1026 01:02:03.728854   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.71a5adc0: {Name:mk13b323a89a31df62edb3f93e2caa9ef5c95608 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:02:03.729026   27934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.71a5adc0 ...
	I1026 01:02:03.729038   27934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.71a5adc0: {Name:mk931eb52f244ae5eac81e077cce00cf1844fe8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:02:03.729110   27934 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.71a5adc0 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt
	I1026 01:02:03.729242   27934 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.71a5adc0 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key
	I1026 01:02:03.729367   27934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key
	I1026 01:02:03.729382   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:02:03.729396   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:02:03.729409   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:02:03.729443   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:02:03.729457   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:02:03.729475   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:02:03.729491   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:02:03.749554   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:02:03.749647   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:02:03.749686   27934 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:02:03.749696   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:02:03.749718   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:02:03.749740   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:02:03.749762   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:02:03.749801   27934 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:02:03.749827   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:02:03.749842   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem -> /usr/share/ca-certificates/17615.pem
	I1026 01:02:03.749854   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /usr/share/ca-certificates/176152.pem
	I1026 01:02:03.749890   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:02:03.752989   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:02:03.753341   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:02:03.753364   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:02:03.753579   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:02:03.753776   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:02:03.753920   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:02:03.754076   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:02:03.829849   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1026 01:02:03.834830   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1026 01:02:03.846065   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1026 01:02:03.849963   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1026 01:02:03.859787   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1026 01:02:03.863509   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1026 01:02:03.873244   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1026 01:02:03.876871   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1026 01:02:03.892364   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1026 01:02:03.896520   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1026 01:02:03.907397   27934 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1026 01:02:03.911631   27934 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1026 01:02:03.924039   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:02:03.948397   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:02:03.971545   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:02:03.994742   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:02:04.019083   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1026 01:02:04.043193   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 01:02:04.066431   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:02:04.089556   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:02:04.112422   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:02:04.137648   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:02:04.163111   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:02:04.187974   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1026 01:02:04.204419   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1026 01:02:04.221407   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1026 01:02:04.240446   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1026 01:02:04.258125   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1026 01:02:04.274506   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1026 01:02:04.290927   27934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1026 01:02:04.307309   27934 ssh_runner.go:195] Run: openssl version
	I1026 01:02:04.312975   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:02:04.323808   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:02:04.328222   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:02:04.328286   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:02:04.334015   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:02:04.344665   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:02:04.355274   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:02:04.359793   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:02:04.359862   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:02:04.365345   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:02:04.376251   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:02:04.387304   27934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:02:04.391720   27934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:02:04.391792   27934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:02:04.397948   27934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 01:02:04.409356   27934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:02:04.413518   27934 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 01:02:04.413569   27934 kubeadm.go:934] updating node {m03 192.168.39.180 8443 v1.31.2 crio true true} ...
	I1026 01:02:04.413666   27934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-300623-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 01:02:04.413689   27934 kube-vip.go:115] generating kube-vip config ...
	I1026 01:02:04.413726   27934 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1026 01:02:04.429892   27934 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1026 01:02:04.429970   27934 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1026 01:02:04.430030   27934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:02:04.439803   27934 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1026 01:02:04.439857   27934 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1026 01:02:04.448835   27934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1026 01:02:04.448847   27934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1026 01:02:04.448867   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1026 01:02:04.448890   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:02:04.448924   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1026 01:02:04.448835   27934 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1026 01:02:04.448969   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1026 01:02:04.449022   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1026 01:02:04.453004   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1026 01:02:04.453036   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1026 01:02:04.477386   27934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1026 01:02:04.477445   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1026 01:02:04.477465   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1026 01:02:04.477513   27934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1026 01:02:04.523830   27934 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1026 01:02:04.523877   27934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1026 01:02:05.306345   27934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1026 01:02:05.316372   27934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1026 01:02:05.333527   27934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:02:05.350382   27934 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1026 01:02:05.366102   27934 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1026 01:02:05.369984   27934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:02:05.381182   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:02:05.496759   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:02:05.512263   27934 host.go:66] Checking if "ha-300623" exists ...
	I1026 01:02:05.512689   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:02:05.512740   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:02:05.531279   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40195
	I1026 01:02:05.531819   27934 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:02:05.532966   27934 main.go:141] libmachine: Using API Version  1
	I1026 01:02:05.532989   27934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:02:05.533339   27934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:02:05.533529   27934 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:02:05.533682   27934 start.go:317] joinCluster: &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:02:05.533839   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1026 01:02:05.533866   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:02:05.536583   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:02:05.537028   27934 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:02:05.537057   27934 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:02:05.537282   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:02:05.537491   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:02:05.537676   27934 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:02:05.537795   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:02:05.697156   27934 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:02:05.697206   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v8d8ct.yqbxucpp9erkd2fb --discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m03 --control-plane --apiserver-advertise-address=192.168.39.180 --apiserver-bind-port=8443"
	I1026 01:02:29.292626   27934 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v8d8ct.yqbxucpp9erkd2fb --discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-300623-m03 --control-plane --apiserver-advertise-address=192.168.39.180 --apiserver-bind-port=8443": (23.595390034s)
	I1026 01:02:29.292667   27934 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1026 01:02:29.885895   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-300623-m03 minikube.k8s.io/updated_at=2024_10_26T01_02_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=ha-300623 minikube.k8s.io/primary=false
	I1026 01:02:29.997019   27934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-300623-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1026 01:02:30.136451   27934 start.go:319] duration metric: took 24.602766496s to joinCluster
	I1026 01:02:30.136544   27934 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:02:30.137000   27934 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:02:30.137905   27934 out.go:177] * Verifying Kubernetes components...
	I1026 01:02:30.139044   27934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:02:30.389764   27934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:02:30.425326   27934 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:02:30.425691   27934 kapi.go:59] client config for ha-300623: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.crt", KeyFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key", CAFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1026 01:02:30.425759   27934 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.183:8443
	I1026 01:02:30.426058   27934 node_ready.go:35] waiting up to 6m0s for node "ha-300623-m03" to be "Ready" ...
	I1026 01:02:30.426159   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:30.426170   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:30.426180   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:30.426189   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:30.431156   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:02:30.926776   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:30.926801   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:30.926811   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:30.926819   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:30.930142   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:31.426736   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:31.426771   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:31.426783   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:31.426791   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:31.430233   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:31.926707   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:31.926732   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:31.926744   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:31.926753   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:31.929704   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:32.426493   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:32.426514   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:32.426522   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:32.426527   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:32.429836   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:32.430379   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:32.926337   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:32.926363   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:32.926376   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:32.926383   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:32.929516   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:33.426312   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:33.426334   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:33.426342   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:33.426364   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:33.430395   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:02:33.927020   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:33.927043   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:33.927050   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:33.927053   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:33.930539   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:34.426611   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:34.426637   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:34.426649   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:34.426653   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:34.429762   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:34.926585   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:34.926607   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:34.926616   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:34.926622   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:34.929963   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:34.930447   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:35.426739   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:35.426760   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:35.426786   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:35.426791   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:35.429676   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:35.926699   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:35.926723   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:35.926731   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:35.926735   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:35.930444   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:36.427025   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:36.427052   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:36.427063   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:36.427069   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:36.430961   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:36.926688   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:36.926715   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:36.926726   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:36.926732   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:36.930504   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:36.931114   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:37.426533   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:37.426568   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:37.426581   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:37.426588   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:37.434793   27934 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1026 01:02:37.926670   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:37.926699   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:37.926711   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:37.926717   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:37.929364   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:38.427306   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:38.427327   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:38.427335   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:38.427339   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:38.434499   27934 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1026 01:02:38.926882   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:38.926902   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:38.926911   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:38.926914   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:38.930831   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:38.931460   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:39.427252   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:39.427274   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:39.427283   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:39.427286   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:39.430650   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:39.926620   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:39.926643   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:39.926654   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:39.926661   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:39.930077   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:40.426363   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:40.426396   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:40.426408   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:40.426414   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:40.429976   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:40.926280   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:40.926310   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:40.926320   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:40.926325   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:40.929942   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:41.426533   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:41.426556   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:41.426563   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:41.426568   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:41.430315   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:41.431209   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:41.926498   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:41.926522   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:41.926529   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:41.926534   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:41.929738   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:42.426973   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:42.427006   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:42.427013   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:42.427019   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:42.430244   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:42.927247   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:42.927275   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:42.927283   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:42.927288   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:42.930906   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:43.426731   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:43.426759   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:43.426768   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:43.426773   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:43.430712   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:43.431301   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:43.926784   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:43.926823   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:43.926832   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:43.926835   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:43.929957   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:44.427237   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:44.427258   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:44.427266   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:44.427270   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:44.430769   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:44.926707   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:44.926731   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:44.926740   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:44.926743   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:44.930247   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:45.427043   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:45.427065   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:45.427074   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:45.427079   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:45.430820   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:45.431387   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:45.927275   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:45.927296   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:45.927304   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:45.927306   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:45.930627   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:46.426245   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:46.426266   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:46.426274   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:46.426278   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:46.429561   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:46.926352   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:46.926373   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:46.926384   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:46.926390   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:46.929454   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:47.426420   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:47.426462   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:47.426472   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:47.426477   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:47.430019   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:47.926864   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:47.926889   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:47.926900   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:47.926906   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:47.929997   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:47.930569   27934 node_ready.go:53] node "ha-300623-m03" has status "Ready":"False"
	I1026 01:02:48.426656   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:48.426693   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.426709   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.426716   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.435417   27934 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1026 01:02:48.436037   27934 node_ready.go:49] node "ha-300623-m03" has status "Ready":"True"
	I1026 01:02:48.436062   27934 node_ready.go:38] duration metric: took 18.009981713s for node "ha-300623-m03" to be "Ready" ...
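	The polling loop above is node_ready.go's readiness wait: it re-issues GET /api/v1/nodes/ha-300623-m03 roughly every 500ms until the node's Ready condition turns True (about 18s here). An equivalent hand check, shown only for illustration under the same kubeconfig assumption:

	  kubectl wait --for=condition=Ready node/ha-300623-m03 --timeout=6m0s
	  kubectl get node ha-300623-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'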
	I1026 01:02:48.436077   27934 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:02:48.436165   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:48.436180   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.436190   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.436203   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.442639   27934 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1026 01:02:48.450258   27934 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.450343   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ntmgc
	I1026 01:02:48.450349   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.450356   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.450360   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.454261   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:48.454872   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:48.454888   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.454895   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.454900   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.459379   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:02:48.460137   27934 pod_ready.go:93] pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.460155   27934 pod_ready.go:82] duration metric: took 9.869467ms for pod "coredns-7c65d6cfc9-ntmgc" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.460165   27934 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.460215   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qx24f
	I1026 01:02:48.460224   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.460231   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.460233   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.463232   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.463771   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:48.463783   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.463792   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.463797   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.466281   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.466732   27934 pod_ready.go:93] pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.466748   27934 pod_ready.go:82] duration metric: took 6.577285ms for pod "coredns-7c65d6cfc9-qx24f" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.466762   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.466818   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623
	I1026 01:02:48.466826   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.466833   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.466837   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.469268   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.469931   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:48.469946   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.469953   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.469957   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.472212   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.472664   27934 pod_ready.go:93] pod "etcd-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.472682   27934 pod_ready.go:82] duration metric: took 5.914156ms for pod "etcd-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.472691   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.472750   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623-m02
	I1026 01:02:48.472759   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.472766   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.472770   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.475167   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.475777   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:48.475794   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.475802   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.475806   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.478259   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:48.478687   27934 pod_ready.go:93] pod "etcd-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.478703   27934 pod_ready.go:82] duration metric: took 6.006167ms for pod "etcd-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.478711   27934 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.627599   27934 request.go:632] Waited for 148.830245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623-m03
	I1026 01:02:48.627657   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-ha-300623-m03
	I1026 01:02:48.627667   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.627674   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.627680   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.631663   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:48.827561   27934 request.go:632] Waited for 195.345637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:48.827630   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:48.827637   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:48.827645   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:48.827649   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:48.831042   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:48.831791   27934 pod_ready.go:93] pod "etcd-ha-300623-m03" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:48.831815   27934 pod_ready.go:82] duration metric: took 353.094836ms for pod "etcd-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:48.831835   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.027283   27934 request.go:632] Waited for 195.388128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623
	I1026 01:02:49.027360   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623
	I1026 01:02:49.027365   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.027373   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.027380   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.030439   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:49.227538   27934 request.go:632] Waited for 196.377694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:49.227614   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:49.227627   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.227643   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.227650   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.230823   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:49.231339   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:49.231360   27934 pod_ready.go:82] duration metric: took 399.517961ms for pod "kube-apiserver-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.231374   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.426746   27934 request.go:632] Waited for 195.299777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m02
	I1026 01:02:49.426820   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m02
	I1026 01:02:49.426826   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.426833   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.426842   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.430033   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:49.626896   27934 request.go:632] Waited for 196.298512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:49.626964   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:49.626970   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.626977   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.626980   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.630142   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:49.630626   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:49.630645   27934 pod_ready.go:82] duration metric: took 399.259883ms for pod "kube-apiserver-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.630655   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:49.826666   27934 request.go:632] Waited for 195.934282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m03
	I1026 01:02:49.826722   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-300623-m03
	I1026 01:02:49.826727   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:49.826739   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:49.826744   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:49.830021   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.027111   27934 request.go:632] Waited for 196.361005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:50.027198   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:50.027210   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.027222   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.027231   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.030533   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.031215   27934 pod_ready.go:93] pod "kube-apiserver-ha-300623-m03" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:50.031238   27934 pod_ready.go:82] duration metric: took 400.574994ms for pod "kube-apiserver-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.031268   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.227253   27934 request.go:632] Waited for 195.903041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623
	I1026 01:02:50.227309   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623
	I1026 01:02:50.227314   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.227321   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.227325   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.230415   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.427535   27934 request.go:632] Waited for 196.340381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:50.427594   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:50.427602   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.427612   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.427619   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.430823   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.431395   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:50.431413   27934 pod_ready.go:82] duration metric: took 400.135776ms for pod "kube-controller-manager-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.431426   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.626990   27934 request.go:632] Waited for 195.470744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m02
	I1026 01:02:50.627069   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m02
	I1026 01:02:50.627075   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.627082   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.627087   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.630185   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.827370   27934 request.go:632] Waited for 196.34647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:50.827442   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:50.827448   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:50.827455   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:50.827461   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:50.831085   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:50.831842   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:50.831859   27934 pod_ready.go:82] duration metric: took 400.426225ms for pod "kube-controller-manager-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:50.831869   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.027015   27934 request.go:632] Waited for 195.078027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m03
	I1026 01:02:51.027084   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-300623-m03
	I1026 01:02:51.027092   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.027099   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.027103   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.031047   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:51.227422   27934 request.go:632] Waited for 195.619523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:51.227479   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:51.227484   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.227492   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.227495   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.231982   27934 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1026 01:02:51.232544   27934 pod_ready.go:93] pod "kube-controller-manager-ha-300623-m03" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:51.232570   27934 pod_ready.go:82] duration metric: took 400.691296ms for pod "kube-controller-manager-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.232584   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-65rns" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.427652   27934 request.go:632] Waited for 194.988908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65rns
	I1026 01:02:51.427748   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-65rns
	I1026 01:02:51.427756   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.427763   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.427769   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.431107   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:51.627383   27934 request.go:632] Waited for 195.646071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:51.627443   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:51.627450   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.627459   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.627465   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.630345   27934 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1026 01:02:51.630913   27934 pod_ready.go:93] pod "kube-proxy-65rns" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:51.630940   27934 pod_ready.go:82] duration metric: took 398.33791ms for pod "kube-proxy-65rns" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.630957   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7hn2d" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:51.826903   27934 request.go:632] Waited for 195.872288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hn2d
	I1026 01:02:51.826976   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7hn2d
	I1026 01:02:51.826981   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:51.826989   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:51.826995   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:51.830596   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.027634   27934 request.go:632] Waited for 196.404478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:52.027720   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:52.027729   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.027740   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.027744   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.031724   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.032488   27934 pod_ready.go:93] pod "kube-proxy-7hn2d" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:52.032512   27934 pod_ready.go:82] duration metric: took 401.542551ms for pod "kube-proxy-7hn2d" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.032525   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mv7sf" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.227636   27934 request.go:632] Waited for 195.035156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mv7sf
	I1026 01:02:52.227691   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mv7sf
	I1026 01:02:52.227697   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.227705   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.227713   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.230866   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.426675   27934 request.go:632] Waited for 195.29136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:52.426757   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:52.426765   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.426775   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.426782   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.429979   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.430570   27934 pod_ready.go:93] pod "kube-proxy-mv7sf" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:52.430594   27934 pod_ready.go:82] duration metric: took 398.058369ms for pod "kube-proxy-mv7sf" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.430608   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.627616   27934 request.go:632] Waited for 196.938648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623
	I1026 01:02:52.627691   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623
	I1026 01:02:52.627697   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.627704   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.627709   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.631135   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.827333   27934 request.go:632] Waited for 195.390365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:52.827388   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623
	I1026 01:02:52.827397   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:52.827404   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:52.827409   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:52.830746   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:52.831581   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:52.831599   27934 pod_ready.go:82] duration metric: took 400.983275ms for pod "kube-scheduler-ha-300623" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:52.831611   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:53.026899   27934 request.go:632] Waited for 195.225563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m02
	I1026 01:02:53.026954   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m02
	I1026 01:02:53.026959   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.026967   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.026971   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.030270   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:53.227500   27934 request.go:632] Waited for 196.386112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:53.227559   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m02
	I1026 01:02:53.227564   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.227572   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.227577   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.231336   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:53.231867   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:53.231885   27934 pod_ready.go:82] duration metric: took 400.266151ms for pod "kube-scheduler-ha-300623-m02" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:53.231896   27934 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:53.426974   27934 request.go:632] Waited for 194.996598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m03
	I1026 01:02:53.427025   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-300623-m03
	I1026 01:02:53.427030   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.427037   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.427041   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.430377   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:53.626766   27934 request.go:632] Waited for 195.735993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:53.626824   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/ha-300623-m03
	I1026 01:02:53.626829   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.626836   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.626840   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.630167   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:53.630954   27934 pod_ready.go:93] pod "kube-scheduler-ha-300623-m03" in "kube-system" namespace has status "Ready":"True"
	I1026 01:02:53.630975   27934 pod_ready.go:82] duration metric: took 399.071645ms for pod "kube-scheduler-ha-300623-m03" in "kube-system" namespace to be "Ready" ...
	I1026 01:02:53.630992   27934 pod_ready.go:39] duration metric: took 5.19490109s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
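	pod_ready.go then walks the kube-system namespace and waits on each system-critical pod matching the label selectors listed above (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler), pairing each pod lookup with a GET on its node. A hand-run approximation of the same check (illustrative; selectors taken from the log):

	  for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	             component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	    kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m0s
	  done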
	I1026 01:02:53.631015   27934 api_server.go:52] waiting for apiserver process to appear ...
	I1026 01:02:53.631076   27934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:02:53.646977   27934 api_server.go:72] duration metric: took 23.510394339s to wait for apiserver process to appear ...
	I1026 01:02:53.647007   27934 api_server.go:88] waiting for apiserver healthz status ...
	I1026 01:02:53.647030   27934 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1026 01:02:53.651895   27934 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1026 01:02:53.651966   27934 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I1026 01:02:53.651972   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.651979   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.651983   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.652674   27934 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1026 01:02:53.652802   27934 api_server.go:141] control plane version: v1.31.2
	I1026 01:02:53.652821   27934 api_server.go:131] duration metric: took 5.805941ms to wait for apiserver health ...
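	api_server.go first confirms a kube-apiserver process exists on the node (pgrep), then probes /healthz and /version against the node endpoint 192.168.39.183:8443 rather than the VIP. The same two probes can be reproduced with curl using the client certificates from the profile directory logged above (illustrative only):

	  MK=/home/jenkins/minikube-integration/19868-8680/.minikube
	  curl --cacert $MK/ca.crt --cert $MK/profiles/ha-300623/client.crt --key $MK/profiles/ha-300623/client.key \
	       https://192.168.39.183:8443/healthz    # expect "ok"
	  curl --cacert $MK/ca.crt --cert $MK/profiles/ha-300623/client.crt --key $MK/profiles/ha-300623/client.key \
	       https://192.168.39.183:8443/version    # reports v1.31.2 here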
	I1026 01:02:53.652830   27934 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 01:02:53.827168   27934 request.go:632] Waited for 174.273301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:53.827222   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:53.827228   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:53.827235   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:53.827240   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:53.834306   27934 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1026 01:02:53.841838   27934 system_pods.go:59] 24 kube-system pods found
	I1026 01:02:53.841872   27934 system_pods.go:61] "coredns-7c65d6cfc9-ntmgc" [b2e07a8a-ed53-4151-9cdd-6345d84fea7d] Running
	I1026 01:02:53.841879   27934 system_pods.go:61] "coredns-7c65d6cfc9-qx24f" [d7fc0eb5-4828-436f-a5c8-8de607f590cf] Running
	I1026 01:02:53.841885   27934 system_pods.go:61] "etcd-ha-300623" [7af25c40-90db-43fb-9d1c-02d3b6092d30] Running
	I1026 01:02:53.841891   27934 system_pods.go:61] "etcd-ha-300623-m02" [5e6978a1-41aa-46dd-a1cd-e02042d4ce04] Running
	I1026 01:02:53.841897   27934 system_pods.go:61] "etcd-ha-300623-m03" [018c3dbe-0bf5-489e-804a-fb1e3195eded] Running
	I1026 01:02:53.841901   27934 system_pods.go:61] "kindnet-2v827" [0a2f3ac1-e6ff-4f8a-83bd-0b8c82e2070b] Running
	I1026 01:02:53.841906   27934 system_pods.go:61] "kindnet-4cqmf" [c887471a-629c-4bf1-9296-8ccb5ba56cd6] Running
	I1026 01:02:53.841911   27934 system_pods.go:61] "kindnet-g5bkb" [0ad4551d-8c28-45b3-9563-03d427208f4f] Running
	I1026 01:02:53.841916   27934 system_pods.go:61] "kube-apiserver-ha-300623" [23f40207-db77-4a02-a2dc-eecea5b1874a] Running
	I1026 01:02:53.841921   27934 system_pods.go:61] "kube-apiserver-ha-300623-m02" [6e2d1aeb-ad12-4328-b4da-6b3a2fd19df0] Running
	I1026 01:02:53.841927   27934 system_pods.go:61] "kube-apiserver-ha-300623-m03" [4f6f2be0-c13c-48d1-b645-719d861bfc9d] Running
	I1026 01:02:53.841932   27934 system_pods.go:61] "kube-controller-manager-ha-300623" [b9c979d4-64e6-473c-b688-295ddd98c379] Running
	I1026 01:02:53.841938   27934 system_pods.go:61] "kube-controller-manager-ha-300623-m02" [4ae0dbcd-d50c-4a53-9347-bed0a06f1f15] Running
	I1026 01:02:53.841945   27934 system_pods.go:61] "kube-controller-manager-ha-300623-m03" [43a89828-44bd-4c39-8656-ce212592e684] Running
	I1026 01:02:53.841951   27934 system_pods.go:61] "kube-proxy-65rns" [895d0bd9-0f38-442f-99a2-6c5c70bddd39] Running
	I1026 01:02:53.841959   27934 system_pods.go:61] "kube-proxy-7hn2d" [8ffc007b-7e17-4810-9f44-f190a8a7d21b] Running
	I1026 01:02:53.841964   27934 system_pods.go:61] "kube-proxy-mv7sf" [687c9b8d-6dc7-46b4-b5c6-dce15b93fe5c] Running
	I1026 01:02:53.841970   27934 system_pods.go:61] "kube-scheduler-ha-300623" [fcbddffd-40d8-4ebd-bf1e-58b1457af487] Running
	I1026 01:02:53.841976   27934 system_pods.go:61] "kube-scheduler-ha-300623-m02" [81664577-53a3-46fd-98f0-5a517d60fc40] Running
	I1026 01:02:53.841982   27934 system_pods.go:61] "kube-scheduler-ha-300623-m03" [4e0f23a0-d27b-4a4f-88cb-9f9fd09cc873] Running
	I1026 01:02:53.841992   27934 system_pods.go:61] "kube-vip-ha-300623" [23c24ab4-cff5-48fa-841b-9567360cbb00] Running
	I1026 01:02:53.841998   27934 system_pods.go:61] "kube-vip-ha-300623-m02" [5e054e06-be47-4fca-bf3d-d0919d31fe23] Running
	I1026 01:02:53.842006   27934 system_pods.go:61] "kube-vip-ha-300623-m03" [e650a523-9ff0-41d2-9446-c84aa4f0b88c] Running
	I1026 01:02:53.842011   27934 system_pods.go:61] "storage-provisioner" [28d286b1-45b3-4775-a8ff-47dc3cb84792] Running
	I1026 01:02:53.842020   27934 system_pods.go:74] duration metric: took 189.182306ms to wait for pod list to return data ...
	I1026 01:02:53.842033   27934 default_sa.go:34] waiting for default service account to be created ...
	I1026 01:02:54.027353   27934 request.go:632] Waited for 185.245125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:02:54.027412   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I1026 01:02:54.027420   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:54.027431   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:54.027441   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:54.030973   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:54.031077   27934 default_sa.go:45] found service account: "default"
	I1026 01:02:54.031089   27934 default_sa.go:55] duration metric: took 189.048618ms for default service account to be created ...
	I1026 01:02:54.031098   27934 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 01:02:54.227423   27934 request.go:632] Waited for 196.255704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:54.227482   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I1026 01:02:54.227493   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:54.227507   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:54.227517   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:54.232907   27934 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1026 01:02:54.240539   27934 system_pods.go:86] 24 kube-system pods found
	I1026 01:02:54.240565   27934 system_pods.go:89] "coredns-7c65d6cfc9-ntmgc" [b2e07a8a-ed53-4151-9cdd-6345d84fea7d] Running
	I1026 01:02:54.240571   27934 system_pods.go:89] "coredns-7c65d6cfc9-qx24f" [d7fc0eb5-4828-436f-a5c8-8de607f590cf] Running
	I1026 01:02:54.240574   27934 system_pods.go:89] "etcd-ha-300623" [7af25c40-90db-43fb-9d1c-02d3b6092d30] Running
	I1026 01:02:54.240578   27934 system_pods.go:89] "etcd-ha-300623-m02" [5e6978a1-41aa-46dd-a1cd-e02042d4ce04] Running
	I1026 01:02:54.240582   27934 system_pods.go:89] "etcd-ha-300623-m03" [018c3dbe-0bf5-489e-804a-fb1e3195eded] Running
	I1026 01:02:54.240586   27934 system_pods.go:89] "kindnet-2v827" [0a2f3ac1-e6ff-4f8a-83bd-0b8c82e2070b] Running
	I1026 01:02:54.240589   27934 system_pods.go:89] "kindnet-4cqmf" [c887471a-629c-4bf1-9296-8ccb5ba56cd6] Running
	I1026 01:02:54.240592   27934 system_pods.go:89] "kindnet-g5bkb" [0ad4551d-8c28-45b3-9563-03d427208f4f] Running
	I1026 01:02:54.240595   27934 system_pods.go:89] "kube-apiserver-ha-300623" [23f40207-db77-4a02-a2dc-eecea5b1874a] Running
	I1026 01:02:54.240599   27934 system_pods.go:89] "kube-apiserver-ha-300623-m02" [6e2d1aeb-ad12-4328-b4da-6b3a2fd19df0] Running
	I1026 01:02:54.240602   27934 system_pods.go:89] "kube-apiserver-ha-300623-m03" [4f6f2be0-c13c-48d1-b645-719d861bfc9d] Running
	I1026 01:02:54.240606   27934 system_pods.go:89] "kube-controller-manager-ha-300623" [b9c979d4-64e6-473c-b688-295ddd98c379] Running
	I1026 01:02:54.240609   27934 system_pods.go:89] "kube-controller-manager-ha-300623-m02" [4ae0dbcd-d50c-4a53-9347-bed0a06f1f15] Running
	I1026 01:02:54.240613   27934 system_pods.go:89] "kube-controller-manager-ha-300623-m03" [43a89828-44bd-4c39-8656-ce212592e684] Running
	I1026 01:02:54.240616   27934 system_pods.go:89] "kube-proxy-65rns" [895d0bd9-0f38-442f-99a2-6c5c70bddd39] Running
	I1026 01:02:54.240620   27934 system_pods.go:89] "kube-proxy-7hn2d" [8ffc007b-7e17-4810-9f44-f190a8a7d21b] Running
	I1026 01:02:54.240624   27934 system_pods.go:89] "kube-proxy-mv7sf" [687c9b8d-6dc7-46b4-b5c6-dce15b93fe5c] Running
	I1026 01:02:54.240627   27934 system_pods.go:89] "kube-scheduler-ha-300623" [fcbddffd-40d8-4ebd-bf1e-58b1457af487] Running
	I1026 01:02:54.240632   27934 system_pods.go:89] "kube-scheduler-ha-300623-m02" [81664577-53a3-46fd-98f0-5a517d60fc40] Running
	I1026 01:02:54.240635   27934 system_pods.go:89] "kube-scheduler-ha-300623-m03" [4e0f23a0-d27b-4a4f-88cb-9f9fd09cc873] Running
	I1026 01:02:54.240641   27934 system_pods.go:89] "kube-vip-ha-300623" [23c24ab4-cff5-48fa-841b-9567360cbb00] Running
	I1026 01:02:54.240644   27934 system_pods.go:89] "kube-vip-ha-300623-m02" [5e054e06-be47-4fca-bf3d-d0919d31fe23] Running
	I1026 01:02:54.240647   27934 system_pods.go:89] "kube-vip-ha-300623-m03" [e650a523-9ff0-41d2-9446-c84aa4f0b88c] Running
	I1026 01:02:54.240650   27934 system_pods.go:89] "storage-provisioner" [28d286b1-45b3-4775-a8ff-47dc3cb84792] Running
	I1026 01:02:54.240656   27934 system_pods.go:126] duration metric: took 209.550822ms to wait for k8s-apps to be running ...
	I1026 01:02:54.240667   27934 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 01:02:54.240705   27934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:02:54.259476   27934 system_svc.go:56] duration metric: took 18.80003ms WaitForService to wait for kubelet
	I1026 01:02:54.259503   27934 kubeadm.go:582] duration metric: took 24.122925603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 01:02:54.259520   27934 node_conditions.go:102] verifying NodePressure condition ...
	I1026 01:02:54.427334   27934 request.go:632] Waited for 167.728559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I1026 01:02:54.427409   27934 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I1026 01:02:54.427417   27934 round_trippers.go:469] Request Headers:
	I1026 01:02:54.427430   27934 round_trippers.go:473]     Accept: application/json, */*
	I1026 01:02:54.427440   27934 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1026 01:02:54.431191   27934 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1026 01:02:54.432324   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:02:54.432349   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:02:54.432365   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:02:54.432369   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:02:54.432378   27934 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 01:02:54.432383   27934 node_conditions.go:123] node cpu capacity is 2
	I1026 01:02:54.432391   27934 node_conditions.go:105] duration metric: took 172.867066ms to run NodePressure ...
	I1026 01:02:54.432404   27934 start.go:241] waiting for startup goroutines ...
	I1026 01:02:54.432431   27934 start.go:255] writing updated cluster config ...
	I1026 01:02:54.432784   27934 ssh_runner.go:195] Run: rm -f paused
	I1026 01:02:54.484591   27934 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1026 01:02:54.487070   27934 out.go:177] * Done! kubectl is now configured to use "ha-300623" cluster and "default" namespace by default
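	Note: the wait sequence above finishes with kubectl pointed at the "ha-300623" context. A minimal spot-check of the HA state the log reports, assuming that profile and kubeconfig context are still present locally (these commands are illustrative and not part of the captured log):

		kubectl --context ha-300623 get nodes                              # expect ha-300623, ha-300623-m02 and ha-300623-m03 all Ready
		kubectl --context ha-300623 -n kube-system get pods                # should match the 24 kube-system pods enumerated above
		minikube -p ha-300623 ssh -- sudo systemctl is-active kubelet      # same kubelet check the log runs via ssh_runner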
	
	
	==> CRI-O <==
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.102281102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904812102255082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15428edb-9ac6-4fea-93b8-0d32ae196804 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.102811248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40b88060-f4d0-4970-a1ce-c865bff5aeee name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.102877383Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40b88060-f4d0-4970-a1ce-c865bff5aeee name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.103165141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85cbf0b8850a2112e92fcc3614b8431c369be6d12b745402809010b5c69e6855,PodSandboxId:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1729904578918936204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758,PodSandboxId:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438995903574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d,PodSandboxId:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438988403122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d7fc0eb5-4828-436f-a5c8-8de607f590cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862c0633984db26e703979be6515817dbe5b1bab13be77cbd4231bdb96801841,PodSandboxId:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1729904437981904808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde,PodSandboxId:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17299044
25720308757,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa,PodSandboxId:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729904425491717711,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c,PodSandboxId:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1729904415339625152,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b,PodSandboxId:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729904412567756795,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3,PodSandboxId:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729904412574844578,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901,PodSandboxId:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729904412570090151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d,PodSandboxId:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729904412474137473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40b88060-f4d0-4970-a1ce-c865bff5aeee name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.138601107Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=918b991a-9406-4a6e-b5ac-bc01f0db627a name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.138733241Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=918b991a-9406-4a6e-b5ac-bc01f0db627a name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.139820130Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c6e01990-c88d-4f70-a1d0-907b86ca6857 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.140337123Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904812140315205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6e01990-c88d-4f70-a1d0-907b86ca6857 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.140783466Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9058a70b-3ec0-4b30-bde9-992e8763b822 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.140846722Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9058a70b-3ec0-4b30-bde9-992e8763b822 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.141263597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85cbf0b8850a2112e92fcc3614b8431c369be6d12b745402809010b5c69e6855,PodSandboxId:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1729904578918936204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758,PodSandboxId:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438995903574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d,PodSandboxId:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438988403122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d7fc0eb5-4828-436f-a5c8-8de607f590cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862c0633984db26e703979be6515817dbe5b1bab13be77cbd4231bdb96801841,PodSandboxId:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1729904437981904808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde,PodSandboxId:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17299044
25720308757,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa,PodSandboxId:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729904425491717711,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c,PodSandboxId:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1729904415339625152,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b,PodSandboxId:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729904412567756795,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3,PodSandboxId:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729904412574844578,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901,PodSandboxId:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729904412570090151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d,PodSandboxId:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729904412474137473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9058a70b-3ec0-4b30-bde9-992e8763b822 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.189158206Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1743e275-8cae-4a6a-9851-12ab42c2f667 name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.189232826Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1743e275-8cae-4a6a-9851-12ab42c2f667 name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.190704244Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=602a6ab1-66cd-40ec-b862-7b97f7c79652 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.191181092Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904812191154653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=602a6ab1-66cd-40ec-b862-7b97f7c79652 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.192055320Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=accf80ae-3367-4d5b-9cb8-e1a1a7890ca9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.192122755Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=accf80ae-3367-4d5b-9cb8-e1a1a7890ca9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.192377609Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85cbf0b8850a2112e92fcc3614b8431c369be6d12b745402809010b5c69e6855,PodSandboxId:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1729904578918936204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758,PodSandboxId:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438995903574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d,PodSandboxId:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438988403122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d7fc0eb5-4828-436f-a5c8-8de607f590cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862c0633984db26e703979be6515817dbe5b1bab13be77cbd4231bdb96801841,PodSandboxId:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1729904437981904808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde,PodSandboxId:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17299044
25720308757,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa,PodSandboxId:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729904425491717711,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c,PodSandboxId:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1729904415339625152,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b,PodSandboxId:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729904412567756795,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3,PodSandboxId:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729904412574844578,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901,PodSandboxId:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729904412570090151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d,PodSandboxId:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729904412474137473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=accf80ae-3367-4d5b-9cb8-e1a1a7890ca9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.230668348Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1dc06b1-ec69-483e-b8af-c49355b91914 name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.230759697Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1dc06b1-ec69-483e-b8af-c49355b91914 name=/runtime.v1.RuntimeService/Version
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.231813266Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5cb9c889-40d5-4e47-aaa0-5093cb9ea4fa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.232234910Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904812232214806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5cb9c889-40d5-4e47-aaa0-5093cb9ea4fa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.232828775Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f54425a-fbb4-40f6-af38-08262f753147 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.232879928Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f54425a-fbb4-40f6-af38-08262f753147 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 01:06:52 ha-300623 crio[655]: time="2024-10-26 01:06:52.233107760Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85cbf0b8850a2112e92fcc3614b8431c369be6d12b745402809010b5c69e6855,PodSandboxId:731eca9181f8bc795aefaf42244496c465f8c1afaa30768bd5843449dde8a254,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1729904578918936204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x8rtl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758,PodSandboxId:20e3c054f64b875efb99887da333e95ea49a8ff1c94c2c80e822d7b7de02b808,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438995903574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ntmgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e07a8a-ed53-4151-9cdd-6345d84fea7d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d,PodSandboxId:d580ea18268bf81fbb705a9ab928aac3ce121e4cb838e5be0d441e9f4eb54e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729904438988403122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qx24f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d7fc0eb5-4828-436f-a5c8-8de607f590cf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:862c0633984db26e703979be6515817dbe5b1bab13be77cbd4231bdb96801841,PodSandboxId:f6635176e0517ab6845f7f76a7bb004a7bcc641b16820b95467aaa56fc567035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1729904437981904808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d286b1-45b3-4775-a8ff-47dc3cb84792,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde,PodSandboxId:cffe8a0cf602c696096b5b98761d406e40098e290f3d08c61ed0a23acddd09cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17299044
25720308757,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4cqmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c887471a-629c-4bf1-9296-8ccb5ba56cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa,PodSandboxId:94078692adcf1c9583bc76363caab5397feaabb0fb65468fe234c4ce6d4ecfb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729904425491717711,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65rns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 895d0bd9-0f38-442f-99a2-6c5c70bddd39,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c,PodSandboxId:620e95994188b7ab83336d4055cc3a9bee8b44280766220f2bfb288a4c0cbb27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1729904415339625152,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410b9cc8959a0fa37bf3160dd4fd727c,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b,PodSandboxId:9b38c5bcef6f69d12003733edd8c1675d5e7b53d90edcb61b99c4ffbd7d3ad06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729904412567756795,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffe5fa9ca4441188a606a24bdbe8722,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3,PodSandboxId:f86f0547d7e3f84c87506a7943db05ea379a666b9ff74ece712b759d0c19b521,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729904412574844578,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3667e64614764ba947adeb95343bcaa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901,PodSandboxId:a63bff1c62868772d73fe6a583a6c74d0bf580e55206f0d33fc1406c2f73f931,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729904412570090151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-300623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755511032387c79ea08c24551165d530,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d,PodSandboxId:e9bc0343ef6690d55ba5f79e46630bcb0d57571d5cec8dd8960ef90403e74166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729904412474137473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-300623,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48b8c6bdc451f81cc4a6c8319036ea10,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f54425a-fbb4-40f6-af38-08262f753147 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85cbf0b8850a2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   731eca9181f8b       busybox-7dff88458-x8rtl
	ca2bd9d7fe0a2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   20e3c054f64b8       coredns-7c65d6cfc9-ntmgc
	56c849c3f6d25       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   d580ea18268bf       coredns-7c65d6cfc9-qx24f
	862c0633984db       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   f6635176e0517       storage-provisioner
	d6d0d55128c15       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                      6 minutes ago       Running             kindnet-cni               0                   cffe8a0cf602c       kindnet-4cqmf
	f7fca08cb5de6       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   94078692adcf1       kube-proxy-65rns
	a103c72040168       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215     6 minutes ago       Running             kube-vip                  0                   620e95994188b       kube-vip-ha-300623
	47a0b2ec9c50d       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   f86f0547d7e3f       kube-controller-manager-ha-300623
	3e321e090fa4b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   a63bff1c62868       etcd-ha-300623
	3c25e47b58ddc       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   9b38c5bcef6f6       kube-scheduler-ha-300623
	3bcea9b84ac37       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   e9bc0343ef669       kube-apiserver-ha-300623
	
	
	==> coredns [56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d] <==
	[INFO] 10.244.0.4:35752 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000083964s
	[INFO] 10.244.0.4:46160 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000070172s
	[INFO] 10.244.2.2:48496 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233704s
	[INFO] 10.244.2.2:43326 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002692245s
	[INFO] 10.244.1.2:54632 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145197s
	[INFO] 10.244.1.2:39137 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001866788s
	[INFO] 10.244.1.2:37569 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000241474s
	[INFO] 10.244.0.4:42983 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170463s
	[INFO] 10.244.0.4:34095 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002204796s
	[INFO] 10.244.0.4:47258 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001867963s
	[INFO] 10.244.0.4:59491 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141493s
	[INFO] 10.244.0.4:57514 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133403s
	[INFO] 10.244.0.4:45585 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000174758s
	[INFO] 10.244.2.2:57387 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165086s
	[INFO] 10.244.2.2:37898 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136051s
	[INFO] 10.244.1.2:45240 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130797s
	[INFO] 10.244.1.2:40585 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000259318s
	[INFO] 10.244.1.2:54189 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089088s
	[INFO] 10.244.1.2:56872 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108098s
	[INFO] 10.244.0.4:43642 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083444s
	[INFO] 10.244.2.2:37138 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000161058s
	[INFO] 10.244.1.2:45522 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000237498s
	[INFO] 10.244.1.2:48964 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000122296s
	[INFO] 10.244.0.4:46128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168182s
	[INFO] 10.244.0.4:35635 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143147s
	
	
	==> coredns [ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758] <==
	[INFO] 10.244.2.2:54963 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004547023s
	[INFO] 10.244.2.2:34531 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000244595s
	[INFO] 10.244.2.2:44217 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000362208s
	[INFO] 10.244.2.2:60780 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018037s
	[INFO] 10.244.2.2:60725 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000259265s
	[INFO] 10.244.2.2:33992 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168214s
	[INFO] 10.244.1.2:48441 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000237097s
	[INFO] 10.244.1.2:50414 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002508011s
	[INFO] 10.244.1.2:36962 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211094s
	[INFO] 10.244.1.2:45147 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163251s
	[INFO] 10.244.1.2:56149 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125966s
	[INFO] 10.244.0.4:56735 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092196s
	[INFO] 10.244.0.4:37487 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002015s
	[INFO] 10.244.2.2:53825 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125794s
	[INFO] 10.244.2.2:52505 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000213989s
	[INFO] 10.244.0.4:37131 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125177s
	[INFO] 10.244.0.4:45742 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131329s
	[INFO] 10.244.0.4:52634 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089226s
	[INFO] 10.244.2.2:58146 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000286556s
	[INFO] 10.244.2.2:59488 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000218728s
	[INFO] 10.244.2.2:51165 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00028421s
	[INFO] 10.244.1.2:37736 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160474s
	[INFO] 10.244.1.2:60585 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000238531s
	[INFO] 10.244.0.4:46233 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000078598s
	[INFO] 10.244.0.4:39578 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000277206s
	
	
	==> describe nodes <==
	Name:               ha-300623
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-300623
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=ha-300623
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_26T01_00_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:00:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-300623
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:06:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 01:03:22 +0000   Sat, 26 Oct 2024 01:00:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 01:03:22 +0000   Sat, 26 Oct 2024 01:00:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 01:03:22 +0000   Sat, 26 Oct 2024 01:00:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 01:03:22 +0000   Sat, 26 Oct 2024 01:00:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-300623
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 92684f32bf5c4a5ea50d57cd59f5b8ee
	  System UUID:                92684f32-bf5c-4a5e-a50d-57cd59f5b8ee
	  Boot ID:                    3d5330c9-a2ef-4296-ab11-4c9bb32f97df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x8rtl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 coredns-7c65d6cfc9-ntmgc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m29s
	  kube-system                 coredns-7c65d6cfc9-qx24f             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m29s
	  kube-system                 etcd-ha-300623                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m33s
	  kube-system                 kindnet-4cqmf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m29s
	  kube-system                 kube-apiserver-ha-300623             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-controller-manager-ha-300623    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-proxy-65rns                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-scheduler-ha-300623             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-vip-ha-300623                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m26s  kube-proxy       
	  Normal  Starting                 6m34s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m33s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m33s  kubelet          Node ha-300623 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m33s  kubelet          Node ha-300623 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m33s  kubelet          Node ha-300623 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m30s  node-controller  Node ha-300623 event: Registered Node ha-300623 in Controller
	  Normal  NodeReady                6m15s  kubelet          Node ha-300623 status is now: NodeReady
	  Normal  RegisteredNode           5m31s  node-controller  Node ha-300623 event: Registered Node ha-300623 in Controller
	  Normal  RegisteredNode           4m17s  node-controller  Node ha-300623 event: Registered Node ha-300623 in Controller
	
	
	Name:               ha-300623-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-300623-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=ha-300623
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_26T01_01_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:01:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-300623-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:04:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 26 Oct 2024 01:03:16 +0000   Sat, 26 Oct 2024 01:04:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 26 Oct 2024 01:03:16 +0000   Sat, 26 Oct 2024 01:04:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 26 Oct 2024 01:03:16 +0000   Sat, 26 Oct 2024 01:04:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 26 Oct 2024 01:03:16 +0000   Sat, 26 Oct 2024 01:04:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    ha-300623-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 619e0e81a0ef43a9b2e79bbc4eb9355e
	  System UUID:                619e0e81-a0ef-43a9-b2e7-9bbc4eb9355e
	  Boot ID:                    89b92f6c-664b-4721-8f8c-216a0ad0c2d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qtdcl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 etcd-ha-300623-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m37s
	  kube-system                 kindnet-g5bkb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m39s
	  kube-system                 kube-apiserver-ha-300623-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-controller-manager-ha-300623-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-7hn2d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-scheduler-ha-300623-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-vip-ha-300623-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m34s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m39s (x8 over 5m39s)  kubelet          Node ha-300623-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m39s (x8 over 5m39s)  kubelet          Node ha-300623-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m39s (x7 over 5m39s)  kubelet          Node ha-300623-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m35s                  node-controller  Node ha-300623-m02 event: Registered Node ha-300623-m02 in Controller
	  Normal  RegisteredNode           5m31s                  node-controller  Node ha-300623-m02 event: Registered Node ha-300623-m02 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-300623-m02 event: Registered Node ha-300623-m02 in Controller
	  Normal  NodeNotReady             2m5s                   node-controller  Node ha-300623-m02 status is now: NodeNotReady
	
	
	Name:               ha-300623-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-300623-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=ha-300623
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_26T01_02_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:02:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-300623-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:06:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 01:03:27 +0000   Sat, 26 Oct 2024 01:02:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 01:03:27 +0000   Sat, 26 Oct 2024 01:02:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 01:03:27 +0000   Sat, 26 Oct 2024 01:02:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 01:03:27 +0000   Sat, 26 Oct 2024 01:02:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.180
	  Hostname:    ha-300623-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 97987e99f2594f70b58fe3aa149b6c7c
	  System UUID:                97987e99-f259-4f70-b58f-e3aa149b6c7c
	  Boot ID:                    7e140c77-fbc1-46f9-addb-72cf937d1703
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mbn94                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 etcd-ha-300623-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m24s
	  kube-system                 kindnet-2v827                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m26s
	  kube-system                 kube-apiserver-ha-300623-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-controller-manager-ha-300623-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-mv7sf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-scheduler-ha-300623-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-vip-ha-300623-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m21s                  kube-proxy       
	  Normal  RegisteredNode           4m26s                  node-controller  Node ha-300623-m03 event: Registered Node ha-300623-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m26s)  kubelet          Node ha-300623-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m26s)  kubelet          Node ha-300623-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x7 over 4m26s)  kubelet          Node ha-300623-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m25s                  node-controller  Node ha-300623-m03 event: Registered Node ha-300623-m03 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-300623-m03 event: Registered Node ha-300623-m03 in Controller
	
	
	Name:               ha-300623-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-300623-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=ha-300623
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_26T01_03_33_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:03:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-300623-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:06:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 01:04:03 +0000   Sat, 26 Oct 2024 01:03:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 01:04:03 +0000   Sat, 26 Oct 2024 01:03:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 01:04:03 +0000   Sat, 26 Oct 2024 01:03:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 01:04:03 +0000   Sat, 26 Oct 2024 01:03:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.197
	  Hostname:    ha-300623-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 505edce099ab4a75b83037ad7ab46771
	  System UUID:                505edce0-99ab-4a75-b830-37ad7ab46771
	  Boot ID:                    896f9280-eb70-46a8-9d85-c3814086494a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fsnn6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m19s
	  kube-system                 kube-proxy-4zk2k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m14s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  3m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m19s (x2 over 3m20s)  kubelet          Node ha-300623-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m19s (x2 over 3m20s)  kubelet          Node ha-300623-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m19s (x2 over 3m20s)  kubelet          Node ha-300623-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-300623-m04 event: Registered Node ha-300623-m04 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-300623-m04 event: Registered Node ha-300623-m04 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-300623-m04 event: Registered Node ha-300623-m04 in Controller
	  Normal  NodeReady                3m                     kubelet          Node ha-300623-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct26 00:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050258] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037804] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.782226] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.951939] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.521399] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct26 01:00] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.061621] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060766] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.166618] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.145628] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.268359] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +3.874441] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.666530] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.060776] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.257866] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.091250] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.528305] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.572352] kauditd_printk_skb: 41 callbacks suppressed
	[Oct26 01:01] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901] <==
	{"level":"warn","ts":"2024-10-26T01:06:52.359000Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.459099Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.470358Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.478573Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.482177Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.492349Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.498380Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.506121Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.510269Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.513043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.521662Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.548907Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.559729Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.567135Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.575975Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.583911Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.590379Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.595906Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.600981Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.604108Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.606701Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.610058Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.616409Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.624746Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-26T01:06:52.658793Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f87838631c8138de","from":"f87838631c8138de","remote-peer-id":"adb75fb507768275","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 01:06:52 up 7 min,  0 users,  load average: 0.18, 0.24, 0.13
	Linux ha-300623 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde] <==
	I1026 01:06:17.175228       1 main.go:323] Node ha-300623-m04 has CIDR [10.244.3.0/24] 
	I1026 01:06:27.175173       1 main.go:296] Handling node with IPs: map[192.168.39.183:{}]
	I1026 01:06:27.175288       1 main.go:300] handling current node
	I1026 01:06:27.175317       1 main.go:296] Handling node with IPs: map[192.168.39.62:{}]
	I1026 01:06:27.175335       1 main.go:323] Node ha-300623-m02 has CIDR [10.244.1.0/24] 
	I1026 01:06:27.175551       1 main.go:296] Handling node with IPs: map[192.168.39.180:{}]
	I1026 01:06:27.175580       1 main.go:323] Node ha-300623-m03 has CIDR [10.244.2.0/24] 
	I1026 01:06:27.175762       1 main.go:296] Handling node with IPs: map[192.168.39.197:{}]
	I1026 01:06:27.175795       1 main.go:323] Node ha-300623-m04 has CIDR [10.244.3.0/24] 
	I1026 01:06:37.177801       1 main.go:296] Handling node with IPs: map[192.168.39.183:{}]
	I1026 01:06:37.177885       1 main.go:300] handling current node
	I1026 01:06:37.177904       1 main.go:296] Handling node with IPs: map[192.168.39.62:{}]
	I1026 01:06:37.177911       1 main.go:323] Node ha-300623-m02 has CIDR [10.244.1.0/24] 
	I1026 01:06:37.178155       1 main.go:296] Handling node with IPs: map[192.168.39.180:{}]
	I1026 01:06:37.178179       1 main.go:323] Node ha-300623-m03 has CIDR [10.244.2.0/24] 
	I1026 01:06:37.178289       1 main.go:296] Handling node with IPs: map[192.168.39.197:{}]
	I1026 01:06:37.178308       1 main.go:323] Node ha-300623-m04 has CIDR [10.244.3.0/24] 
	I1026 01:06:47.182761       1 main.go:296] Handling node with IPs: map[192.168.39.183:{}]
	I1026 01:06:47.182815       1 main.go:300] handling current node
	I1026 01:06:47.182832       1 main.go:296] Handling node with IPs: map[192.168.39.62:{}]
	I1026 01:06:47.182839       1 main.go:323] Node ha-300623-m02 has CIDR [10.244.1.0/24] 
	I1026 01:06:47.183048       1 main.go:296] Handling node with IPs: map[192.168.39.180:{}]
	I1026 01:06:47.183073       1 main.go:323] Node ha-300623-m03 has CIDR [10.244.2.0/24] 
	I1026 01:06:47.183223       1 main.go:296] Handling node with IPs: map[192.168.39.197:{}]
	I1026 01:06:47.183245       1 main.go:323] Node ha-300623-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d] <==
	W1026 01:00:17.926981       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.183]
	I1026 01:00:17.928181       1 controller.go:615] quota admission added evaluator for: endpoints
	I1026 01:00:17.935826       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 01:00:17.947904       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1026 01:00:18.894624       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1026 01:00:18.916292       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 01:00:19.043184       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1026 01:00:23.502518       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1026 01:00:23.580105       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1026 01:03:00.396346       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48596: use of closed network connection
	E1026 01:03:00.597696       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48608: use of closed network connection
	E1026 01:03:00.779383       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48638: use of closed network connection
	E1026 01:03:00.968960       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48650: use of closed network connection
	E1026 01:03:01.159859       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48672: use of closed network connection
	E1026 01:03:01.356945       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48682: use of closed network connection
	E1026 01:03:01.529718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48700: use of closed network connection
	E1026 01:03:01.709409       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60606: use of closed network connection
	E1026 01:03:01.891333       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60636: use of closed network connection
	E1026 01:03:02.183836       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60668: use of closed network connection
	E1026 01:03:02.371592       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60688: use of closed network connection
	E1026 01:03:02.545427       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60698: use of closed network connection
	E1026 01:03:02.716320       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60708: use of closed network connection
	E1026 01:03:02.895527       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60734: use of closed network connection
	E1026 01:03:03.082972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60756: use of closed network connection
	W1026 01:04:27.938129       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.180 192.168.39.183]
	
	
	==> kube-controller-manager [47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3] <==
	I1026 01:03:33.037458       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:33.051536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:33.162489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	E1026 01:03:33.296244       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"ff6c8323-43e2-4224-a2c5-fbee23186204\", ResourceVersion:\"911\", Generation:1, CreationTimestamp:time.Date(2024, time.October, 26, 1, 0, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\\",
\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20241007-36f62932\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b16180), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\
", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002641908), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeCl
aimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002641920), EmptyDir:(*v1.EmptyDirVolumeSource)(n
il), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVo
lumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002641938), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), Azur
eFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20241007-36f62932\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001b161a0)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSou
rce)(0xc001b161e0)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false,
RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc002a7eba0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContai
ner(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002879af8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002835100), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Ove
rhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0029fa100)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002879b40)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1026 01:03:33.604085       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:35.173961       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:36.911095       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:36.978536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:37.761108       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-300623-m04"
	I1026 01:03:37.763013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:37.822795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:43.288569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:52.993775       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-300623-m04"
	I1026 01:03:52.994235       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:53.016162       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:03:55.127200       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:04:03.835355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m04"
	I1026 01:04:47.785209       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-300623-m04"
	I1026 01:04:47.785779       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m02"
	I1026 01:04:47.821461       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m02"
	I1026 01:04:47.859957       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.530512ms"
	I1026 01:04:47.860782       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="74.115µs"
	I1026 01:04:50.162222       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m02"
	I1026 01:04:52.952538       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-300623-m02"
	
	
	==> kube-proxy [f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1026 01:00:25.689413       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1026 01:00:25.723767       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.183"]
	E1026 01:00:25.723854       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 01:00:25.758166       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1026 01:00:25.758214       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 01:00:25.758247       1 server_linux.go:169] "Using iptables Proxier"
	I1026 01:00:25.760715       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 01:00:25.761068       1 server.go:483] "Version info" version="v1.31.2"
	I1026 01:00:25.761102       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 01:00:25.763718       1 config.go:199] "Starting service config controller"
	I1026 01:00:25.763757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1026 01:00:25.763790       1 config.go:105] "Starting endpoint slice config controller"
	I1026 01:00:25.763796       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1026 01:00:25.764426       1 config.go:328] "Starting node config controller"
	I1026 01:00:25.764461       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1026 01:00:25.864157       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1026 01:00:25.864237       1 shared_informer.go:320] Caches are synced for service config
	I1026 01:00:25.864661       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b] <==
	I1026 01:02:26.440503       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2v827" node="ha-300623-m03"
	E1026 01:02:55.345123       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qtdcl\": pod busybox-7dff88458-qtdcl is already assigned to node \"ha-300623-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-qtdcl" node="ha-300623-m02"
	E1026 01:02:55.345196       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1d2aa5b5-e44c-4423-a263-a19406face68(default/busybox-7dff88458-qtdcl) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-qtdcl"
	E1026 01:02:55.345218       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qtdcl\": pod busybox-7dff88458-qtdcl is already assigned to node \"ha-300623-m02\"" pod="default/busybox-7dff88458-qtdcl"
	I1026 01:02:55.345275       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-qtdcl" node="ha-300623-m02"
	E1026 01:02:55.394267       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x8rtl\": pod busybox-7dff88458-x8rtl is already assigned to node \"ha-300623\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-x8rtl" node="ha-300623"
	E1026 01:02:55.394343       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6ce948a6-f1ee-46b3-9d7c-483c6cd9e8f5(default/busybox-7dff88458-x8rtl) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-x8rtl"
	E1026 01:02:55.394364       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x8rtl\": pod busybox-7dff88458-x8rtl is already assigned to node \"ha-300623\"" pod="default/busybox-7dff88458-x8rtl"
	I1026 01:02:55.394386       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-x8rtl" node="ha-300623"
	E1026 01:02:55.394962       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-mbn94\": pod busybox-7dff88458-mbn94 is already assigned to node \"ha-300623-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-mbn94" node="ha-300623-m03"
	E1026 01:02:55.395010       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod dd5257f3-d0ba-4672-9836-da890e32fb0d(default/busybox-7dff88458-mbn94) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-mbn94"
	E1026 01:02:55.395023       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-mbn94\": pod busybox-7dff88458-mbn94 is already assigned to node \"ha-300623-m03\"" pod="default/busybox-7dff88458-mbn94"
	I1026 01:02:55.395037       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-mbn94" node="ha-300623-m03"
	E1026 01:03:33.099592       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4zk2k\": pod kube-proxy-4zk2k is already assigned to node \"ha-300623-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4zk2k" node="ha-300623-m04"
	E1026 01:03:33.101341       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8e40741c-73a0-41fa-b38f-a59fed42525b(kube-system/kube-proxy-4zk2k) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-4zk2k"
	E1026 01:03:33.101520       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4zk2k\": pod kube-proxy-4zk2k is already assigned to node \"ha-300623-m04\"" pod="kube-system/kube-proxy-4zk2k"
	I1026 01:03:33.101594       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4zk2k" node="ha-300623-m04"
	E1026 01:03:33.102404       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-l58kk\": pod kindnet-l58kk is already assigned to node \"ha-300623-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-l58kk" node="ha-300623-m04"
	E1026 01:03:33.109277       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 952ba5f9-93b1-4543-8b73-3ac1600315fc(kube-system/kindnet-l58kk) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-l58kk"
	E1026 01:03:33.109487       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-l58kk\": pod kindnet-l58kk is already assigned to node \"ha-300623-m04\"" pod="kube-system/kindnet-l58kk"
	I1026 01:03:33.109689       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-l58kk" node="ha-300623-m04"
	E1026 01:03:33.136820       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5lm6x\": pod kindnet-5lm6x is already assigned to node \"ha-300623-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5lm6x" node="ha-300623-m04"
	E1026 01:03:33.137312       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5lm6x\": pod kindnet-5lm6x is already assigned to node \"ha-300623-m04\"" pod="kube-system/kindnet-5lm6x"
	E1026 01:03:33.152104       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jhv9k\": pod kube-proxy-jhv9k is already assigned to node \"ha-300623-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jhv9k" node="ha-300623-m04"
	E1026 01:03:33.153545       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jhv9k\": pod kube-proxy-jhv9k is already assigned to node \"ha-300623-m04\"" pod="kube-system/kube-proxy-jhv9k"
	
	
	==> kubelet <==
	Oct 26 01:05:19 ha-300623 kubelet[1306]: E1026 01:05:19.171492    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904719170828944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:19 ha-300623 kubelet[1306]: E1026 01:05:19.171604    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904719170828944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:29 ha-300623 kubelet[1306]: E1026 01:05:29.173388    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904729173040296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:29 ha-300623 kubelet[1306]: E1026 01:05:29.173412    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904729173040296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:39 ha-300623 kubelet[1306]: E1026 01:05:39.176311    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904739175567800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:39 ha-300623 kubelet[1306]: E1026 01:05:39.176778    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904739175567800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:49 ha-300623 kubelet[1306]: E1026 01:05:49.179258    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904749178892500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:49 ha-300623 kubelet[1306]: E1026 01:05:49.179567    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904749178892500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:59 ha-300623 kubelet[1306]: E1026 01:05:59.181750    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904759181221897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:05:59 ha-300623 kubelet[1306]: E1026 01:05:59.181791    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904759181221897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:09 ha-300623 kubelet[1306]: E1026 01:06:09.183203    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904769182765460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:09 ha-300623 kubelet[1306]: E1026 01:06:09.183277    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904769182765460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:19 ha-300623 kubelet[1306]: E1026 01:06:19.106419    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 26 01:06:19 ha-300623 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 26 01:06:19 ha-300623 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 26 01:06:19 ha-300623 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 01:06:19 ha-300623 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 01:06:19 ha-300623 kubelet[1306]: E1026 01:06:19.185785    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904779185440641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:19 ha-300623 kubelet[1306]: E1026 01:06:19.185827    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904779185440641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:29 ha-300623 kubelet[1306]: E1026 01:06:29.188435    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904789187815376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:29 ha-300623 kubelet[1306]: E1026 01:06:29.188477    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904789187815376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:39 ha-300623 kubelet[1306]: E1026 01:06:39.190241    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904799189890933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:39 ha-300623 kubelet[1306]: E1026 01:06:39.190296    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904799189890933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:49 ha-300623 kubelet[1306]: E1026 01:06:49.192194    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904809191813316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 01:06:49 ha-300623 kubelet[1306]: E1026 01:06:49.192231    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729904809191813316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-300623 -n ha-300623
helpers_test.go:261: (dbg) Run:  kubectl --context ha-300623 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (415.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-300623 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-300623 -v=7 --alsologtostderr
E1026 01:08:52.962092   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-300623 -v=7 --alsologtostderr: exit status 82 (2m1.796102097s)

                                                
                                                
-- stdout --
	* Stopping node "ha-300623-m04"  ...
	* Stopping node "ha-300623-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 01:06:53.676278   33160 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:06:53.676405   33160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:06:53.676416   33160 out.go:358] Setting ErrFile to fd 2...
	I1026 01:06:53.676422   33160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:06:53.676591   33160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 01:06:53.676802   33160 out.go:352] Setting JSON to false
	I1026 01:06:53.676895   33160 mustload.go:65] Loading cluster: ha-300623
	I1026 01:06:53.677277   33160 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:06:53.677368   33160 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:06:53.677569   33160 mustload.go:65] Loading cluster: ha-300623
	I1026 01:06:53.677704   33160 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:06:53.677733   33160 stop.go:39] StopHost: ha-300623-m04
	I1026 01:06:53.678161   33160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:06:53.678216   33160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:06:53.693272   33160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35055
	I1026 01:06:53.693725   33160 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:06:53.694258   33160 main.go:141] libmachine: Using API Version  1
	I1026 01:06:53.694284   33160 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:06:53.694634   33160 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:06:53.696838   33160 out.go:177] * Stopping node "ha-300623-m04"  ...
	I1026 01:06:53.697959   33160 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1026 01:06:53.697986   33160 main.go:141] libmachine: (ha-300623-m04) Calling .DriverName
	I1026 01:06:53.698218   33160 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1026 01:06:53.698246   33160 main.go:141] libmachine: (ha-300623-m04) Calling .GetSSHHostname
	I1026 01:06:53.700708   33160 main.go:141] libmachine: (ha-300623-m04) DBG | domain ha-300623-m04 has defined MAC address 52:54:00:96:9f:e2 in network mk-ha-300623
	I1026 01:06:53.701102   33160 main.go:141] libmachine: (ha-300623-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9f:e2", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:03:18 +0000 UTC Type:0 Mac:52:54:00:96:9f:e2 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-300623-m04 Clientid:01:52:54:00:96:9f:e2}
	I1026 01:06:53.701131   33160 main.go:141] libmachine: (ha-300623-m04) DBG | domain ha-300623-m04 has defined IP address 192.168.39.197 and MAC address 52:54:00:96:9f:e2 in network mk-ha-300623
	I1026 01:06:53.701274   33160 main.go:141] libmachine: (ha-300623-m04) Calling .GetSSHPort
	I1026 01:06:53.701460   33160 main.go:141] libmachine: (ha-300623-m04) Calling .GetSSHKeyPath
	I1026 01:06:53.701611   33160 main.go:141] libmachine: (ha-300623-m04) Calling .GetSSHUsername
	I1026 01:06:53.701806   33160 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m04/id_rsa Username:docker}
	I1026 01:06:53.794150   33160 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1026 01:06:53.847721   33160 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1026 01:06:53.901775   33160 main.go:141] libmachine: Stopping "ha-300623-m04"...
	I1026 01:06:53.901811   33160 main.go:141] libmachine: (ha-300623-m04) Calling .GetState
	I1026 01:06:53.903484   33160 main.go:141] libmachine: (ha-300623-m04) Calling .Stop
	I1026 01:06:53.907237   33160 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 0/120
	I1026 01:06:55.005350   33160 main.go:141] libmachine: (ha-300623-m04) Calling .GetState
	I1026 01:06:55.006763   33160 main.go:141] libmachine: Machine "ha-300623-m04" was stopped.
	I1026 01:06:55.006779   33160 stop.go:75] duration metric: took 1.308829289s to stop
	I1026 01:06:55.006798   33160 stop.go:39] StopHost: ha-300623-m03
	I1026 01:06:55.007083   33160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:06:55.007119   33160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:06:55.022005   33160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39263
	I1026 01:06:55.022407   33160 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:06:55.022869   33160 main.go:141] libmachine: Using API Version  1
	I1026 01:06:55.022889   33160 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:06:55.023225   33160 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:06:55.025006   33160 out.go:177] * Stopping node "ha-300623-m03"  ...
	I1026 01:06:55.026088   33160 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1026 01:06:55.026111   33160 main.go:141] libmachine: (ha-300623-m03) Calling .DriverName
	I1026 01:06:55.026305   33160 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1026 01:06:55.026327   33160 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHHostname
	I1026 01:06:55.029081   33160 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:06:55.029595   33160 main.go:141] libmachine: (ha-300623-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:38:db", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:01:52 +0000 UTC Type:0 Mac:52:54:00:c1:38:db Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-300623-m03 Clientid:01:52:54:00:c1:38:db}
	I1026 01:06:55.029633   33160 main.go:141] libmachine: (ha-300623-m03) DBG | domain ha-300623-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:c1:38:db in network mk-ha-300623
	I1026 01:06:55.029778   33160 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHPort
	I1026 01:06:55.029948   33160 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHKeyPath
	I1026 01:06:55.030082   33160 main.go:141] libmachine: (ha-300623-m03) Calling .GetSSHUsername
	I1026 01:06:55.030223   33160 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m03/id_rsa Username:docker}
	I1026 01:06:55.117694   33160 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1026 01:06:55.171576   33160 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1026 01:06:55.226284   33160 main.go:141] libmachine: Stopping "ha-300623-m03"...
	I1026 01:06:55.226319   33160 main.go:141] libmachine: (ha-300623-m03) Calling .GetState
	I1026 01:06:55.227937   33160 main.go:141] libmachine: (ha-300623-m03) Calling .Stop
	I1026 01:06:55.231470   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 0/120
	I1026 01:06:56.232778   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 1/120
	I1026 01:06:57.234085   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 2/120
	I1026 01:06:58.236422   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 3/120
	I1026 01:06:59.237925   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 4/120
	I1026 01:07:00.239861   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 5/120
	I1026 01:07:01.241453   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 6/120
	I1026 01:07:02.242837   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 7/120
	I1026 01:07:03.244363   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 8/120
	I1026 01:07:04.245553   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 9/120
	I1026 01:07:05.247484   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 10/120
	I1026 01:07:06.248764   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 11/120
	I1026 01:07:07.250428   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 12/120
	I1026 01:07:08.251766   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 13/120
	I1026 01:07:09.253775   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 14/120
	I1026 01:07:10.255569   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 15/120
	I1026 01:07:11.257113   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 16/120
	I1026 01:07:12.258542   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 17/120
	I1026 01:07:13.260550   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 18/120
	I1026 01:07:14.261924   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 19/120
	I1026 01:07:15.263803   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 20/120
	I1026 01:07:16.265443   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 21/120
	I1026 01:07:17.266955   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 22/120
	I1026 01:07:18.268710   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 23/120
	I1026 01:07:19.270196   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 24/120
	I1026 01:07:20.272027   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 25/120
	I1026 01:07:21.273691   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 26/120
	I1026 01:07:22.275995   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 27/120
	I1026 01:07:23.277575   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 28/120
	I1026 01:07:24.279970   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 29/120
	I1026 01:07:25.281668   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 30/120
	I1026 01:07:26.282824   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 31/120
	I1026 01:07:27.284394   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 32/120
	I1026 01:07:28.285692   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 33/120
	I1026 01:07:29.287387   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 34/120
	I1026 01:07:30.289196   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 35/120
	I1026 01:07:31.290644   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 36/120
	I1026 01:07:32.292172   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 37/120
	I1026 01:07:33.293640   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 38/120
	I1026 01:07:34.295845   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 39/120
	I1026 01:07:35.297673   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 40/120
	I1026 01:07:36.299107   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 41/120
	I1026 01:07:37.300615   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 42/120
	I1026 01:07:38.302054   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 43/120
	I1026 01:07:39.303256   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 44/120
	I1026 01:07:40.305156   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 45/120
	I1026 01:07:41.306562   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 46/120
	I1026 01:07:42.307920   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 47/120
	I1026 01:07:43.309245   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 48/120
	I1026 01:07:44.311004   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 49/120
	I1026 01:07:45.312875   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 50/120
	I1026 01:07:46.314256   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 51/120
	I1026 01:07:47.316244   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 52/120
	I1026 01:07:48.317803   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 53/120
	I1026 01:07:49.319140   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 54/120
	I1026 01:07:50.320963   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 55/120
	I1026 01:07:51.322385   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 56/120
	I1026 01:07:52.323609   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 57/120
	I1026 01:07:53.324992   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 58/120
	I1026 01:07:54.326250   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 59/120
	I1026 01:07:55.328185   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 60/120
	I1026 01:07:56.329492   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 61/120
	I1026 01:07:57.330978   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 62/120
	I1026 01:07:58.332299   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 63/120
	I1026 01:07:59.333852   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 64/120
	I1026 01:08:00.335640   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 65/120
	I1026 01:08:01.337570   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 66/120
	I1026 01:08:02.339223   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 67/120
	I1026 01:08:03.340500   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 68/120
	I1026 01:08:04.342322   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 69/120
	I1026 01:08:05.344199   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 70/120
	I1026 01:08:06.345560   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 71/120
	I1026 01:08:07.347199   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 72/120
	I1026 01:08:08.348653   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 73/120
	I1026 01:08:09.350157   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 74/120
	I1026 01:08:10.352212   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 75/120
	I1026 01:08:11.353490   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 76/120
	I1026 01:08:12.355383   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 77/120
	I1026 01:08:13.356767   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 78/120
	I1026 01:08:14.358248   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 79/120
	I1026 01:08:15.360119   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 80/120
	I1026 01:08:16.361476   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 81/120
	I1026 01:08:17.362872   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 82/120
	I1026 01:08:18.364158   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 83/120
	I1026 01:08:19.365540   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 84/120
	I1026 01:08:20.366833   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 85/120
	I1026 01:08:21.368282   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 86/120
	I1026 01:08:22.370383   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 87/120
	I1026 01:08:23.371982   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 88/120
	I1026 01:08:24.373356   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 89/120
	I1026 01:08:25.375285   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 90/120
	I1026 01:08:26.376655   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 91/120
	I1026 01:08:27.378384   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 92/120
	I1026 01:08:28.379622   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 93/120
	I1026 01:08:29.380994   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 94/120
	I1026 01:08:30.382838   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 95/120
	I1026 01:08:31.384357   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 96/120
	I1026 01:08:32.385868   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 97/120
	I1026 01:08:33.387347   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 98/120
	I1026 01:08:34.388743   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 99/120
	I1026 01:08:35.390404   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 100/120
	I1026 01:08:36.391699   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 101/120
	I1026 01:08:37.392817   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 102/120
	I1026 01:08:38.394287   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 103/120
	I1026 01:08:39.395730   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 104/120
	I1026 01:08:40.397273   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 105/120
	I1026 01:08:41.399176   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 106/120
	I1026 01:08:42.400739   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 107/120
	I1026 01:08:43.402199   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 108/120
	I1026 01:08:44.403661   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 109/120
	I1026 01:08:45.405199   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 110/120
	I1026 01:08:46.406592   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 111/120
	I1026 01:08:47.408117   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 112/120
	I1026 01:08:48.410182   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 113/120
	I1026 01:08:49.412015   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 114/120
	I1026 01:08:50.413938   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 115/120
	I1026 01:08:51.415261   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 116/120
	I1026 01:08:52.417033   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 117/120
	I1026 01:08:53.418412   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 118/120
	I1026 01:08:54.419782   33160 main.go:141] libmachine: (ha-300623-m03) Waiting for machine to stop 119/120
	I1026 01:08:55.420531   33160 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1026 01:08:55.420575   33160 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1026 01:08:55.422469   33160 out.go:201] 
	W1026 01:08:55.423759   33160 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1026 01:08:55.423777   33160 out.go:270] * 
	* 
	W1026 01:08:55.426080   33160 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 01:08:55.427595   33160 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-300623 -v=7 --alsologtostderr" : exit status 82
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-300623 --wait=true -v=7 --alsologtostderr
E1026 01:09:20.665663   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:11:37.284957   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:13:00.350003   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-300623 --wait=true -v=7 --alsologtostderr: (4m51.142626557s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-300623
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-300623 -n ha-300623
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-300623 logs -n 25: (2.128681198s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m02:/home/docker/cp-test_ha-300623-m03_ha-300623-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m02 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m03_ha-300623-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04:/home/docker/cp-test_ha-300623-m03_ha-300623-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m04 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m03_ha-300623-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp testdata/cp-test.txt                                                | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2355760230/001/cp-test_ha-300623-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623:/home/docker/cp-test_ha-300623-m04_ha-300623.txt                       |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623 sudo cat                                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623.txt                                 |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m02:/home/docker/cp-test_ha-300623-m04_ha-300623-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m02 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03:/home/docker/cp-test_ha-300623-m04_ha-300623-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m03 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-300623 node stop m02 -v=7                                                     | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-300623 node start m02 -v=7                                                    | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-300623 -v=7                                                           | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-300623 -v=7                                                                | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-300623 --wait=true -v=7                                                    | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:08 UTC | 26 Oct 24 01:13 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-300623                                                                | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:13 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 01:08:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 01:08:55.477669   33649 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:08:55.477913   33649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:08:55.477923   33649 out.go:358] Setting ErrFile to fd 2...
	I1026 01:08:55.477927   33649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:08:55.478117   33649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 01:08:55.478660   33649 out.go:352] Setting JSON to false
	I1026 01:08:55.479562   33649 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3075,"bootTime":1729901860,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 01:08:55.479629   33649 start.go:139] virtualization: kvm guest
	I1026 01:08:55.482520   33649 out.go:177] * [ha-300623] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 01:08:55.483678   33649 notify.go:220] Checking for updates...
	I1026 01:08:55.483703   33649 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 01:08:55.484974   33649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:08:55.486089   33649 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:08:55.487172   33649 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:08:55.488141   33649 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 01:08:55.489202   33649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:08:55.490700   33649 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:08:55.490781   33649 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 01:08:55.491300   33649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:08:55.491340   33649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:08:55.506123   33649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36675
	I1026 01:08:55.506713   33649 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:08:55.507333   33649 main.go:141] libmachine: Using API Version  1
	I1026 01:08:55.507349   33649 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:08:55.507741   33649 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:08:55.507943   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:08:55.543879   33649 out.go:177] * Using the kvm2 driver based on existing profile
	I1026 01:08:55.544920   33649 start.go:297] selected driver: kvm2
	I1026 01:08:55.544932   33649 start.go:901] validating driver "kvm2" against &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.197 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false d
efault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:08:55.545078   33649 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:08:55.545380   33649 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:08:55.545493   33649 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 01:08:55.560486   33649 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 01:08:55.561170   33649 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 01:08:55.561203   33649 cni.go:84] Creating CNI manager for ""
	I1026 01:08:55.561261   33649 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1026 01:08:55.561316   33649 start.go:340] cluster config:
	{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.197 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:08:55.561484   33649 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:08:55.563355   33649 out.go:177] * Starting "ha-300623" primary control-plane node in "ha-300623" cluster
	I1026 01:08:55.564488   33649 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:08:55.564532   33649 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 01:08:55.564544   33649 cache.go:56] Caching tarball of preloaded images
	I1026 01:08:55.564614   33649 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:08:55.564627   33649 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 01:08:55.564746   33649 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:08:55.564974   33649 start.go:360] acquireMachinesLock for ha-300623: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 01:08:55.565023   33649 start.go:364] duration metric: took 30.291µs to acquireMachinesLock for "ha-300623"
	I1026 01:08:55.565042   33649 start.go:96] Skipping create...Using existing machine configuration
	I1026 01:08:55.565054   33649 fix.go:54] fixHost starting: 
	I1026 01:08:55.565332   33649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:08:55.565365   33649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:08:55.579187   33649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I1026 01:08:55.579659   33649 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:08:55.580169   33649 main.go:141] libmachine: Using API Version  1
	I1026 01:08:55.580187   33649 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:08:55.580509   33649 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:08:55.580676   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:08:55.580824   33649 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:08:55.582217   33649 fix.go:112] recreateIfNeeded on ha-300623: state=Running err=<nil>
	W1026 01:08:55.582234   33649 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 01:08:55.583853   33649 out.go:177] * Updating the running kvm2 "ha-300623" VM ...
	I1026 01:08:55.584977   33649 machine.go:93] provisionDockerMachine start ...
	I1026 01:08:55.584993   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:08:55.585201   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:08:55.587663   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.588091   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:08:55.588124   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.588250   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:08:55.588423   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:55.588568   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:55.588657   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:08:55.588789   33649 main.go:141] libmachine: Using SSH client type: native
	I1026 01:08:55.588968   33649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:08:55.588979   33649 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 01:08:55.702197   33649 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-300623
	
	I1026 01:08:55.702224   33649 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:08:55.702430   33649 buildroot.go:166] provisioning hostname "ha-300623"
	I1026 01:08:55.702443   33649 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:08:55.702588   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:08:55.705133   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.705602   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:08:55.705630   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.705811   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:08:55.705993   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:55.706156   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:55.706247   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:08:55.706394   33649 main.go:141] libmachine: Using SSH client type: native
	I1026 01:08:55.706618   33649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:08:55.706635   33649 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-300623 && echo "ha-300623" | sudo tee /etc/hostname
	I1026 01:08:55.833261   33649 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-300623
	
	I1026 01:08:55.833295   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:08:55.835711   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.836022   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:08:55.836060   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.836321   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:08:55.836494   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:55.836625   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:55.836744   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:08:55.836943   33649 main.go:141] libmachine: Using SSH client type: native
	I1026 01:08:55.837111   33649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:08:55.837126   33649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-300623' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-300623/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-300623' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:08:55.950368   33649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:08:55.950413   33649 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:08:55.950480   33649 buildroot.go:174] setting up certificates
	I1026 01:08:55.950495   33649 provision.go:84] configureAuth start
	I1026 01:08:55.950517   33649 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:08:55.950846   33649 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:08:55.953486   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.953868   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:08:55.953900   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.954083   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:08:55.956415   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.956776   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:08:55.956804   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.956947   33649 provision.go:143] copyHostCerts
	I1026 01:08:55.956974   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:08:55.957018   33649 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:08:55.957031   33649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:08:55.957105   33649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:08:55.957211   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:08:55.957231   33649 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:08:55.957237   33649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:08:55.957264   33649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:08:55.957395   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:08:55.957438   33649 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:08:55.957446   33649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:08:55.957493   33649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:08:55.957559   33649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.ha-300623 san=[127.0.0.1 192.168.39.183 ha-300623 localhost minikube]
	I1026 01:08:56.205633   33649 provision.go:177] copyRemoteCerts
	I1026 01:08:56.205687   33649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:08:56.205709   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:08:56.208132   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:56.208425   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:08:56.208448   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:56.208594   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:08:56.208748   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:56.208884   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:08:56.209038   33649 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:08:56.295385   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:08:56.295467   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 01:08:56.321337   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:08:56.321448   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:08:56.350661   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:08:56.350743   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1026 01:08:56.380006   33649 provision.go:87] duration metric: took 429.493351ms to configureAuth
	I1026 01:08:56.380034   33649 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:08:56.380236   33649 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:08:56.380312   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:08:56.382788   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:56.383158   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:08:56.383185   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:56.383361   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:08:56.383538   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:56.383702   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:56.383804   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:08:56.383986   33649 main.go:141] libmachine: Using SSH client type: native
	I1026 01:08:56.384179   33649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:08:56.384195   33649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:10:27.107496   33649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:10:27.107523   33649 machine.go:96] duration metric: took 1m31.522533775s to provisionDockerMachine
	I1026 01:10:27.107535   33649 start.go:293] postStartSetup for "ha-300623" (driver="kvm2")
	I1026 01:10:27.107547   33649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:10:27.107568   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:10:27.107919   33649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:10:27.107949   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:10:27.110959   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.111308   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:10:27.111332   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.111495   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:10:27.111686   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:10:27.111857   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:10:27.111983   33649 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:10:27.201316   33649 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:10:27.205477   33649 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:10:27.205508   33649 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:10:27.205588   33649 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:10:27.205706   33649 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:10:27.205719   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /etc/ssl/certs/176152.pem
	I1026 01:10:27.205839   33649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:10:27.214970   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:10:27.237645   33649 start.go:296] duration metric: took 130.093775ms for postStartSetup
	I1026 01:10:27.237689   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:10:27.237955   33649 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1026 01:10:27.237977   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:10:27.240769   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.241283   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:10:27.241311   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.241496   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:10:27.241694   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:10:27.241844   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:10:27.241961   33649 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	W1026 01:10:27.328117   33649 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1026 01:10:27.328142   33649 fix.go:56] duration metric: took 1m31.763089862s for fixHost
	I1026 01:10:27.328166   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:10:27.330818   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.331182   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:10:27.331210   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.331317   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:10:27.331494   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:10:27.331628   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:10:27.331737   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:10:27.331860   33649 main.go:141] libmachine: Using SSH client type: native
	I1026 01:10:27.332049   33649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:10:27.332063   33649 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:10:27.441936   33649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729905027.399947841
	
	I1026 01:10:27.441956   33649 fix.go:216] guest clock: 1729905027.399947841
	I1026 01:10:27.441968   33649 fix.go:229] Guest: 2024-10-26 01:10:27.399947841 +0000 UTC Remote: 2024-10-26 01:10:27.328149088 +0000 UTC m=+91.889873341 (delta=71.798753ms)
	I1026 01:10:27.442007   33649 fix.go:200] guest clock delta is within tolerance: 71.798753ms
	I1026 01:10:27.442013   33649 start.go:83] releasing machines lock for "ha-300623", held for 1m31.87697823s
	I1026 01:10:27.442029   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:10:27.442284   33649 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:10:27.444841   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.445176   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:10:27.445196   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.445384   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:10:27.445870   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:10:27.446021   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:10:27.446086   33649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:10:27.446139   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:10:27.446191   33649 ssh_runner.go:195] Run: cat /version.json
	I1026 01:10:27.446215   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:10:27.448636   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.448934   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:10:27.448960   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.449009   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.449138   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:10:27.449305   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:10:27.449398   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:10:27.449444   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.449448   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:10:27.449606   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:10:27.449599   33649 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:10:27.449784   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:10:27.449963   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:10:27.450094   33649 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:10:27.530809   33649 ssh_runner.go:195] Run: systemctl --version
	I1026 01:10:27.556941   33649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:10:27.716547   33649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:10:27.725391   33649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:10:27.725496   33649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:10:27.734551   33649 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 01:10:27.734575   33649 start.go:495] detecting cgroup driver to use...
	I1026 01:10:27.734653   33649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:10:27.750749   33649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:10:27.765207   33649 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:10:27.765266   33649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:10:27.779013   33649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:10:27.792254   33649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:10:27.943873   33649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:10:28.086730   33649 docker.go:233] disabling docker service ...
	I1026 01:10:28.086810   33649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:10:28.103469   33649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:10:28.116868   33649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:10:28.291455   33649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:10:28.452338   33649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:10:28.466471   33649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:10:28.485805   33649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:10:28.485869   33649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:10:28.496181   33649 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:10:28.496248   33649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:10:28.506301   33649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:10:28.516784   33649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:10:28.527104   33649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:10:28.537732   33649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:10:28.548364   33649 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:10:28.559572   33649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:10:28.569724   33649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:10:28.579314   33649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:10:28.589200   33649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:10:28.738121   33649 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 01:10:29.494343   33649 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:10:29.494419   33649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:10:29.500059   33649 start.go:563] Will wait 60s for crictl version
	I1026 01:10:29.500121   33649 ssh_runner.go:195] Run: which crictl
	I1026 01:10:29.503676   33649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:10:29.540001   33649 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:10:29.540090   33649 ssh_runner.go:195] Run: crio --version
	I1026 01:10:29.567687   33649 ssh_runner.go:195] Run: crio --version
	I1026 01:10:29.597322   33649 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:10:29.598784   33649 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:10:29.601552   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:29.602219   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:10:29.602241   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:29.602529   33649 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:10:29.607026   33649 kubeadm.go:883] updating cluster {Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.197 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 01:10:29.607152   33649 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:10:29.607195   33649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:10:29.651349   33649 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 01:10:29.651378   33649 crio.go:433] Images already preloaded, skipping extraction
	I1026 01:10:29.651446   33649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:10:29.682582   33649 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 01:10:29.682604   33649 cache_images.go:84] Images are preloaded, skipping loading
	I1026 01:10:29.682616   33649 kubeadm.go:934] updating node { 192.168.39.183 8443 v1.31.2 crio true true} ...
	I1026 01:10:29.682754   33649 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-300623 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
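This rendered [Service] drop-in (node IP, hostname override, bootstrap and kubelet kubeconfigs) is what gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down. To inspect the unit actually in effect on the node (a sketch; the minikube ssh form is an assumption):

    minikube ssh -p ha-300623 "sudo systemctl cat kubelet"
    minikube ssh -p ha-300623 "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"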
	I1026 01:10:29.682840   33649 ssh_runner.go:195] Run: crio config
	I1026 01:10:29.731311   33649 cni.go:84] Creating CNI manager for ""
	I1026 01:10:29.731333   33649 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1026 01:10:29.731343   33649 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 01:10:29.731376   33649 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-300623 NodeName:ha-300623 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 01:10:29.731524   33649 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-300623"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.183"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
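The rendered v1beta4 kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few steps below. It can be sanity-checked with the bundled kubeadm binary (a sketch; assumes kubeadm sits next to kubelet under /var/lib/minikube/binaries/v1.31.2 and that this kubeadm version provides the config validate subcommand):

    minikube ssh -p ha-300623 \
      "sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"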
	
	I1026 01:10:29.731546   33649 kube-vip.go:115] generating kube-vip config ...
	I1026 01:10:29.731585   33649 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1026 01:10:29.742547   33649 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1026 01:10:29.742662   33649 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
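kube-vip runs as a static pod on each control-plane node; in this ARP/leader-election configuration the elected leader answers for the API server VIP 192.168.39.254 on eth0 and, with lb_enable set, load-balances port 8443 across the control planes. Two quick checks (a sketch; assumes anonymous access to /healthz is enabled and that the node interface is eth0 as configured above):

    curl -k https://192.168.39.254:8443/healthz        # "ok" while some control plane holds the VIP
    minikube ssh -p ha-300623 "ip addr show eth0"      # the current leader lists 192.168.39.254 as an extra address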
	I1026 01:10:29.742725   33649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:10:29.752124   33649 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 01:10:29.752208   33649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1026 01:10:29.761412   33649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1026 01:10:29.777702   33649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:10:29.793807   33649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1026 01:10:29.810417   33649 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1026 01:10:29.827763   33649 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1026 01:10:29.832629   33649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:10:29.983222   33649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:10:29.997860   33649 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623 for IP: 192.168.39.183
	I1026 01:10:29.997884   33649 certs.go:194] generating shared ca certs ...
	I1026 01:10:29.997899   33649 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:10:29.998058   33649 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:10:29.998126   33649 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:10:29.998142   33649 certs.go:256] generating profile certs ...
	I1026 01:10:29.998244   33649 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key
	I1026 01:10:29.998274   33649 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.b49fea96
	I1026 01:10:29.998293   33649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.b49fea96 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.62 192.168.39.180 192.168.39.254]
	I1026 01:10:30.128480   33649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.b49fea96 ...
	I1026 01:10:30.128509   33649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.b49fea96: {Name:mk15171cd87aebeeb1954b8b9ced93c1b8ee279d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:10:30.128691   33649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.b49fea96 ...
	I1026 01:10:30.128706   33649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.b49fea96: {Name:mk56a2c08358344f6c0e8ae27054ecf9d5383934 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:10:30.128796   33649 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.b49fea96 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt
	I1026 01:10:30.128986   33649 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.b49fea96 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key
	I1026 01:10:30.129122   33649 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key
	I1026 01:10:30.129144   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:10:30.129165   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:10:30.129182   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:10:30.129199   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:10:30.129214   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:10:30.129228   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:10:30.129243   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:10:30.129257   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:10:30.129323   33649 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:10:30.129364   33649 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:10:30.129379   33649 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:10:30.129414   33649 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:10:30.129482   33649 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:10:30.129516   33649 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:10:30.129568   33649 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:10:30.129608   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /usr/share/ca-certificates/176152.pem
	I1026 01:10:30.129628   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:10:30.129644   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem -> /usr/share/ca-certificates/17615.pem
	I1026 01:10:30.130225   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:10:30.155172   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:10:30.177371   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:10:30.200743   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:10:30.224262   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 01:10:30.247513   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 01:10:30.279555   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:10:30.302523   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:10:30.324873   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:10:30.348139   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:10:30.371770   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:10:30.394434   33649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 01:10:30.410934   33649 ssh_runner.go:195] Run: openssl version
	I1026 01:10:30.416475   33649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:10:30.428798   33649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:10:30.432899   33649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:10:30.432967   33649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:10:30.438311   33649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:10:30.447111   33649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:10:30.456887   33649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:10:30.460965   33649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:10:30.461010   33649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:10:30.466238   33649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:10:30.474929   33649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:10:30.484730   33649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:10:30.488854   33649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:10:30.488902   33649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:10:30.494077   33649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
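Each CA dropped into /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject-hash name so the system trust store picks it up; b5213941.0 above is simply the hash of minikubeCA.pem. The same step by hand (a sketch; filenames taken from the commands above):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0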
	I1026 01:10:30.502718   33649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:10:30.506872   33649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 01:10:30.512239   33649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 01:10:30.517293   33649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 01:10:30.522424   33649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 01:10:30.527856   33649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 01:10:30.533048   33649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
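The -checkend 86400 runs above screen the existing control-plane certificates: openssl exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, and a non-zero exit flags the certificate as expiring. For example (a sketch against one of the certs listed above):

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "valid for at least another 24h"
    else
      echo "expires within 24h"
    fi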
	I1026 01:10:30.538210   33649 kubeadm.go:392] StartCluster: {Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.197 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:10:30.538309   33649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 01:10:30.538361   33649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 01:10:30.572699   33649 cri.go:89] found id: "3563301f0c57cea95eece52811ebb342c4141daa311af0de187d483fc414c78b"
	I1026 01:10:30.572728   33649 cri.go:89] found id: "4bbadc1ee6738fa748337dc1739972bb5863fc5a70ad43bb158811e22ebdcc5f"
	I1026 01:10:30.572735   33649 cri.go:89] found id: "38a96f0d31c5e6dce1082a1f11e8f87dc2d7ea33057e42366a1e6e2475656626"
	I1026 01:10:30.572740   33649 cri.go:89] found id: "ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758"
	I1026 01:10:30.572744   33649 cri.go:89] found id: "56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d"
	I1026 01:10:30.572748   33649 cri.go:89] found id: "d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde"
	I1026 01:10:30.572752   33649 cri.go:89] found id: "f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa"
	I1026 01:10:30.572755   33649 cri.go:89] found id: "a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c"
	I1026 01:10:30.572758   33649 cri.go:89] found id: "47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3"
	I1026 01:10:30.572763   33649 cri.go:89] found id: "3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901"
	I1026 01:10:30.572766   33649 cri.go:89] found id: "3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b"
	I1026 01:10:30.572769   33649 cri.go:89] found id: "3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d"
	I1026 01:10:30.572771   33649 cri.go:89] found id: ""
	I1026 01:10:30.572809   33649 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
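The container IDs in the log above come from filtering by the CRI label that records each container's pod namespace rather than by name; the same query can be run directly on the node (a sketch, reusing the exact crictl command from the log):

    minikube ssh -p ha-300623 "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"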
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-300623 -n ha-300623
helpers_test.go:261: (dbg) Run:  kubectl --context ha-300623 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (415.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-300623 stop -v=7 --alsologtostderr: exit status 82 (2m0.469318024s)

                                                
                                                
-- stdout --
	* Stopping node "ha-300623-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 01:14:06.692179   35717 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:14:06.692295   35717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:14:06.692304   35717 out.go:358] Setting ErrFile to fd 2...
	I1026 01:14:06.692308   35717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:14:06.692507   35717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 01:14:06.692747   35717 out.go:352] Setting JSON to false
	I1026 01:14:06.692817   35717 mustload.go:65] Loading cluster: ha-300623
	I1026 01:14:06.693177   35717 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:14:06.693263   35717 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:14:06.693482   35717 mustload.go:65] Loading cluster: ha-300623
	I1026 01:14:06.693624   35717 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:14:06.693649   35717 stop.go:39] StopHost: ha-300623-m04
	I1026 01:14:06.694005   35717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:14:06.694111   35717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:14:06.708814   35717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37305
	I1026 01:14:06.709364   35717 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:14:06.709991   35717 main.go:141] libmachine: Using API Version  1
	I1026 01:14:06.710015   35717 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:14:06.710337   35717 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:14:06.712976   35717 out.go:177] * Stopping node "ha-300623-m04"  ...
	I1026 01:14:06.714383   35717 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1026 01:14:06.714418   35717 main.go:141] libmachine: (ha-300623-m04) Calling .DriverName
	I1026 01:14:06.714675   35717 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1026 01:14:06.714730   35717 main.go:141] libmachine: (ha-300623-m04) Calling .GetSSHHostname
	I1026 01:14:06.717873   35717 main.go:141] libmachine: (ha-300623-m04) DBG | domain ha-300623-m04 has defined MAC address 52:54:00:96:9f:e2 in network mk-ha-300623
	I1026 01:14:06.718270   35717 main.go:141] libmachine: (ha-300623-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:9f:e2", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 02:13:34 +0000 UTC Type:0 Mac:52:54:00:96:9f:e2 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-300623-m04 Clientid:01:52:54:00:96:9f:e2}
	I1026 01:14:06.718318   35717 main.go:141] libmachine: (ha-300623-m04) DBG | domain ha-300623-m04 has defined IP address 192.168.39.197 and MAC address 52:54:00:96:9f:e2 in network mk-ha-300623
	I1026 01:14:06.718572   35717 main.go:141] libmachine: (ha-300623-m04) Calling .GetSSHPort
	I1026 01:14:06.718796   35717 main.go:141] libmachine: (ha-300623-m04) Calling .GetSSHKeyPath
	I1026 01:14:06.719012   35717 main.go:141] libmachine: (ha-300623-m04) Calling .GetSSHUsername
	I1026 01:14:06.719177   35717 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623-m04/id_rsa Username:docker}
	I1026 01:14:06.804347   35717 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1026 01:14:06.857103   35717 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
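Before powering the node off, its /etc/cni and /etc/kubernetes trees are snapshotted into /var/lib/minikube/backup; rsync --relative reproduces the full source path under the backup root, so /etc/kubernetes ends up at /var/lib/minikube/backup/etc/kubernetes. Equivalent backup and restore pair as a sketch (the restore direction is an assumption based on that layout):

    sudo rsync --archive --relative /etc/cni /etc/kubernetes /var/lib/minikube/backup
    sudo rsync --archive /var/lib/minikube/backup/etc/ /etc/    # restore on the next start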
	I1026 01:14:06.909032   35717 main.go:141] libmachine: Stopping "ha-300623-m04"...
	I1026 01:14:06.909057   35717 main.go:141] libmachine: (ha-300623-m04) Calling .GetState
	I1026 01:14:06.910621   35717 main.go:141] libmachine: (ha-300623-m04) Calling .Stop
	I1026 01:14:06.914198   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 0/120
	I1026 01:14:07.916132   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 1/120
	I1026 01:14:08.917448   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 2/120
	I1026 01:14:09.918707   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 3/120
	I1026 01:14:10.919998   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 4/120
	I1026 01:14:11.921892   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 5/120
	I1026 01:14:12.923263   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 6/120
	I1026 01:14:13.924792   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 7/120
	I1026 01:14:14.926304   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 8/120
	I1026 01:14:15.927524   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 9/120
	I1026 01:14:16.929483   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 10/120
	I1026 01:14:17.931162   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 11/120
	I1026 01:14:18.932776   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 12/120
	I1026 01:14:19.934154   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 13/120
	I1026 01:14:20.935830   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 14/120
	I1026 01:14:21.937919   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 15/120
	I1026 01:14:22.940129   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 16/120
	I1026 01:14:23.941737   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 17/120
	I1026 01:14:24.943864   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 18/120
	I1026 01:14:25.945230   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 19/120
	I1026 01:14:26.947319   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 20/120
	I1026 01:14:27.948852   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 21/120
	I1026 01:14:28.950252   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 22/120
	I1026 01:14:29.951801   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 23/120
	I1026 01:14:30.953116   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 24/120
	I1026 01:14:31.955114   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 25/120
	I1026 01:14:32.956455   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 26/120
	I1026 01:14:33.958029   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 27/120
	I1026 01:14:34.960028   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 28/120
	I1026 01:14:35.961526   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 29/120
	I1026 01:14:36.963545   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 30/120
	I1026 01:14:37.964738   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 31/120
	I1026 01:14:38.965985   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 32/120
	I1026 01:14:39.967923   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 33/120
	I1026 01:14:40.969259   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 34/120
	I1026 01:14:41.971146   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 35/120
	I1026 01:14:42.972471   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 36/120
	I1026 01:14:43.974765   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 37/120
	I1026 01:14:44.976023   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 38/120
	I1026 01:14:45.977317   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 39/120
	I1026 01:14:46.979262   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 40/120
	I1026 01:14:47.980483   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 41/120
	I1026 01:14:48.981763   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 42/120
	I1026 01:14:49.983999   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 43/120
	I1026 01:14:50.985332   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 44/120
	I1026 01:14:51.987444   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 45/120
	I1026 01:14:52.988629   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 46/120
	I1026 01:14:53.989927   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 47/120
	I1026 01:14:54.991156   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 48/120
	I1026 01:14:55.992314   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 49/120
	I1026 01:14:56.994389   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 50/120
	I1026 01:14:57.996064   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 51/120
	I1026 01:14:58.997334   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 52/120
	I1026 01:14:59.998846   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 53/120
	I1026 01:15:01.000723   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 54/120
	I1026 01:15:02.002637   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 55/120
	I1026 01:15:03.003992   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 56/120
	I1026 01:15:04.005465   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 57/120
	I1026 01:15:05.006835   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 58/120
	I1026 01:15:06.008349   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 59/120
	I1026 01:15:07.010441   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 60/120
	I1026 01:15:08.011722   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 61/120
	I1026 01:15:09.013849   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 62/120
	I1026 01:15:10.015578   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 63/120
	I1026 01:15:11.016723   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 64/120
	I1026 01:15:12.018049   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 65/120
	I1026 01:15:13.019867   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 66/120
	I1026 01:15:14.021104   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 67/120
	I1026 01:15:15.023381   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 68/120
	I1026 01:15:16.024724   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 69/120
	I1026 01:15:17.026826   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 70/120
	I1026 01:15:18.028168   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 71/120
	I1026 01:15:19.029950   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 72/120
	I1026 01:15:20.031322   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 73/120
	I1026 01:15:21.032627   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 74/120
	I1026 01:15:22.034259   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 75/120
	I1026 01:15:23.035607   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 76/120
	I1026 01:15:24.037235   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 77/120
	I1026 01:15:25.038636   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 78/120
	I1026 01:15:26.040045   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 79/120
	I1026 01:15:27.042442   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 80/120
	I1026 01:15:28.043805   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 81/120
	I1026 01:15:29.045799   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 82/120
	I1026 01:15:30.047793   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 83/120
	I1026 01:15:31.049368   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 84/120
	I1026 01:15:32.051596   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 85/120
	I1026 01:15:33.052869   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 86/120
	I1026 01:15:34.054498   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 87/120
	I1026 01:15:35.055924   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 88/120
	I1026 01:15:36.057371   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 89/120
	I1026 01:15:37.059356   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 90/120
	I1026 01:15:38.061588   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 91/120
	I1026 01:15:39.063918   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 92/120
	I1026 01:15:40.065400   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 93/120
	I1026 01:15:41.066926   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 94/120
	I1026 01:15:42.068792   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 95/120
	I1026 01:15:43.071101   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 96/120
	I1026 01:15:44.072446   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 97/120
	I1026 01:15:45.074632   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 98/120
	I1026 01:15:46.075946   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 99/120
	I1026 01:15:47.077738   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 100/120
	I1026 01:15:48.079287   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 101/120
	I1026 01:15:49.080608   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 102/120
	I1026 01:15:50.081915   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 103/120
	I1026 01:15:51.083785   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 104/120
	I1026 01:15:52.085978   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 105/120
	I1026 01:15:53.087466   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 106/120
	I1026 01:15:54.089361   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 107/120
	I1026 01:15:55.090664   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 108/120
	I1026 01:15:56.092106   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 109/120
	I1026 01:15:57.094252   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 110/120
	I1026 01:15:58.095490   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 111/120
	I1026 01:15:59.096666   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 112/120
	I1026 01:16:00.097913   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 113/120
	I1026 01:16:01.099939   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 114/120
	I1026 01:16:02.101905   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 115/120
	I1026 01:16:03.103180   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 116/120
	I1026 01:16:04.104526   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 117/120
	I1026 01:16:05.105807   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 118/120
	I1026 01:16:06.107016   35717 main.go:141] libmachine: (ha-300623-m04) Waiting for machine to stop 119/120
	I1026 01:16:07.107457   35717 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1026 01:16:07.107504   35717 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1026 01:16:07.109591   35717 out.go:201] 
	W1026 01:16:07.110983   35717 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1026 01:16:07.110996   35717 out.go:270] * 
	* 
	W1026 01:16:07.113189   35717 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 01:16:07.114466   35717 out.go:201] 

                                                
                                                
** /stderr **
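The stop loop above polls the machine roughly once per second for 120 iterations, a two-minute budget that matches the 2m0.47s runtime before the exit status 82 GUEST_STOP_TIMEOUT: the m04 guest never left the Running state. With the kvm2 driver and the qemu:///system URI from this profile, the domain can be inspected directly via libvirt (a sketch; assumes the libvirt domain is named after the node):

    virsh -c qemu:///system list --all | grep ha-300623-m04
    virsh -c qemu:///system dominfo ha-300623-m04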
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-300623 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr: (19.041099003s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-300623 -n ha-300623
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-300623 logs -n 25: (2.006911776s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-300623 ssh -n ha-300623-m02 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m03_ha-300623-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04:/home/docker/cp-test_ha-300623-m03_ha-300623-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m04 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m03_ha-300623-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp testdata/cp-test.txt                                                | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2355760230/001/cp-test_ha-300623-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623:/home/docker/cp-test_ha-300623-m04_ha-300623.txt                       |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623 sudo cat                                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623.txt                                 |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m02:/home/docker/cp-test_ha-300623-m04_ha-300623-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m02 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m03:/home/docker/cp-test_ha-300623-m04_ha-300623-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n                                                                 | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | ha-300623-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-300623 ssh -n ha-300623-m03 sudo cat                                          | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC | 26 Oct 24 01:04 UTC |
	|         | /home/docker/cp-test_ha-300623-m04_ha-300623-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-300623 node stop m02 -v=7                                                     | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-300623 node start m02 -v=7                                                    | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-300623 -v=7                                                           | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-300623 -v=7                                                                | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-300623 --wait=true -v=7                                                    | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:08 UTC | 26 Oct 24 01:13 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-300623                                                                | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:13 UTC |                     |
	| node    | ha-300623 node delete m03 -v=7                                                   | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:13 UTC | 26 Oct 24 01:14 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-300623 stop -v=7                                                              | ha-300623 | jenkins | v1.34.0 | 26 Oct 24 01:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
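	
	Note: the final `stop` entry in the table above (issued at 01:14 UTC) has no completion timestamp. The stop/restart cycle recorded in the table can be replayed against the same profile with the commands below (a sketch reconstructed from the Command/Args columns; only the profile name and flags shown in the table are taken from the log):
	
		minikube stop -p ha-300623 -v=7 --alsologtostderr
		minikube start -p ha-300623 --wait=true -v=7 --alsologtostderr
	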
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 01:08:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 01:08:55.477669   33649 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:08:55.477913   33649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:08:55.477923   33649 out.go:358] Setting ErrFile to fd 2...
	I1026 01:08:55.477927   33649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:08:55.478117   33649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 01:08:55.478660   33649 out.go:352] Setting JSON to false
	I1026 01:08:55.479562   33649 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3075,"bootTime":1729901860,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 01:08:55.479629   33649 start.go:139] virtualization: kvm guest
	I1026 01:08:55.482520   33649 out.go:177] * [ha-300623] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 01:08:55.483678   33649 notify.go:220] Checking for updates...
	I1026 01:08:55.483703   33649 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 01:08:55.484974   33649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:08:55.486089   33649 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:08:55.487172   33649 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:08:55.488141   33649 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 01:08:55.489202   33649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:08:55.490700   33649 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:08:55.490781   33649 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 01:08:55.491300   33649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:08:55.491340   33649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:08:55.506123   33649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36675
	I1026 01:08:55.506713   33649 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:08:55.507333   33649 main.go:141] libmachine: Using API Version  1
	I1026 01:08:55.507349   33649 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:08:55.507741   33649 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:08:55.507943   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:08:55.543879   33649 out.go:177] * Using the kvm2 driver based on existing profile
	I1026 01:08:55.544920   33649 start.go:297] selected driver: kvm2
	I1026 01:08:55.544932   33649 start.go:901] validating driver "kvm2" against &{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.197 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false d
efault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:08:55.545078   33649 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:08:55.545380   33649 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:08:55.545493   33649 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 01:08:55.560486   33649 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 01:08:55.561170   33649 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 01:08:55.561203   33649 cni.go:84] Creating CNI manager for ""
	I1026 01:08:55.561261   33649 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1026 01:08:55.561316   33649 start.go:340] cluster config:
	{Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.197 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:08:55.561484   33649 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:08:55.563355   33649 out.go:177] * Starting "ha-300623" primary control-plane node in "ha-300623" cluster
	I1026 01:08:55.564488   33649 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:08:55.564532   33649 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 01:08:55.564544   33649 cache.go:56] Caching tarball of preloaded images
	I1026 01:08:55.564614   33649 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:08:55.564627   33649 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 01:08:55.564746   33649 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/config.json ...
	I1026 01:08:55.564974   33649 start.go:360] acquireMachinesLock for ha-300623: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 01:08:55.565023   33649 start.go:364] duration metric: took 30.291µs to acquireMachinesLock for "ha-300623"
	I1026 01:08:55.565042   33649 start.go:96] Skipping create...Using existing machine configuration
	I1026 01:08:55.565054   33649 fix.go:54] fixHost starting: 
	I1026 01:08:55.565332   33649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:08:55.565365   33649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:08:55.579187   33649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I1026 01:08:55.579659   33649 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:08:55.580169   33649 main.go:141] libmachine: Using API Version  1
	I1026 01:08:55.580187   33649 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:08:55.580509   33649 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:08:55.580676   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:08:55.580824   33649 main.go:141] libmachine: (ha-300623) Calling .GetState
	I1026 01:08:55.582217   33649 fix.go:112] recreateIfNeeded on ha-300623: state=Running err=<nil>
	W1026 01:08:55.582234   33649 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 01:08:55.583853   33649 out.go:177] * Updating the running kvm2 "ha-300623" VM ...
	I1026 01:08:55.584977   33649 machine.go:93] provisionDockerMachine start ...
	I1026 01:08:55.584993   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:08:55.585201   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:08:55.587663   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.588091   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:08:55.588124   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.588250   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:08:55.588423   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:55.588568   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:55.588657   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:08:55.588789   33649 main.go:141] libmachine: Using SSH client type: native
	I1026 01:08:55.588968   33649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:08:55.588979   33649 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 01:08:55.702197   33649 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-300623
	
	I1026 01:08:55.702224   33649 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:08:55.702430   33649 buildroot.go:166] provisioning hostname "ha-300623"
	I1026 01:08:55.702443   33649 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:08:55.702588   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:08:55.705133   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.705602   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:08:55.705630   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.705811   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:08:55.705993   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:55.706156   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:55.706247   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:08:55.706394   33649 main.go:141] libmachine: Using SSH client type: native
	I1026 01:08:55.706618   33649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:08:55.706635   33649 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-300623 && echo "ha-300623" | sudo tee /etc/hostname
	I1026 01:08:55.833261   33649 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-300623
	
	I1026 01:08:55.833295   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:08:55.835711   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.836022   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:08:55.836060   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.836321   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:08:55.836494   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:55.836625   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:55.836744   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:08:55.836943   33649 main.go:141] libmachine: Using SSH client type: native
	I1026 01:08:55.837111   33649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:08:55.837126   33649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-300623' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-300623/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-300623' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:08:55.950368   33649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
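	
	The multi-line shell command above is minikube's idempotent hostname mapping: it rewrites the existing 127.0.1.1 entry (or appends one) only when no line in /etc/hosts already ends with the hostname, so repeated provisioning runs leave the file unchanged. The result can be spot-checked on the guest (a sketch; assumes the profile is still running and that `minikube ssh` is given the command to run as its argument):
	
		minikube -p ha-300623 ssh "grep -n 'ha-300623' /etc/hosts"
	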
	I1026 01:08:55.950413   33649 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:08:55.950480   33649 buildroot.go:174] setting up certificates
	I1026 01:08:55.950495   33649 provision.go:84] configureAuth start
	I1026 01:08:55.950517   33649 main.go:141] libmachine: (ha-300623) Calling .GetMachineName
	I1026 01:08:55.950846   33649 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:08:55.953486   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.953868   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:08:55.953900   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.954083   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:08:55.956415   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.956776   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:08:55.956804   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:55.956947   33649 provision.go:143] copyHostCerts
	I1026 01:08:55.956974   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:08:55.957018   33649 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:08:55.957031   33649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:08:55.957105   33649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:08:55.957211   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:08:55.957231   33649 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:08:55.957237   33649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:08:55.957264   33649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:08:55.957395   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:08:55.957438   33649 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:08:55.957446   33649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:08:55.957493   33649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:08:55.957559   33649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.ha-300623 san=[127.0.0.1 192.168.39.183 ha-300623 localhost minikube]
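	
	The regenerated server certificate carries the SANs listed in the log line above (127.0.0.1, 192.168.39.183, ha-300623, localhost, minikube). If an apiserver TLS/SAN mismatch is suspected after the restart, the certificate can be inspected directly from the integration workspace (a sketch; the path is the ServerCertPath logged earlier, and `-ext` requires OpenSSL 1.1.1 or newer):
	
		openssl x509 -in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -noout -subject -ext subjectAltName
	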
	I1026 01:08:56.205633   33649 provision.go:177] copyRemoteCerts
	I1026 01:08:56.205687   33649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:08:56.205709   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:08:56.208132   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:56.208425   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:08:56.208448   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:56.208594   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:08:56.208748   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:56.208884   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:08:56.209038   33649 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:08:56.295385   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:08:56.295467   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 01:08:56.321337   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:08:56.321448   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:08:56.350661   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:08:56.350743   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1026 01:08:56.380006   33649 provision.go:87] duration metric: took 429.493351ms to configureAuth
	I1026 01:08:56.380034   33649 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:08:56.380236   33649 config.go:182] Loaded profile config "ha-300623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:08:56.380312   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:08:56.382788   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:56.383158   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:08:56.383185   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:08:56.383361   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:08:56.383538   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:56.383702   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:08:56.383804   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:08:56.383986   33649 main.go:141] libmachine: Using SSH client type: native
	I1026 01:08:56.384179   33649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:08:56.384195   33649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:10:27.107496   33649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:10:27.107523   33649 machine.go:96] duration metric: took 1m31.522533775s to provisionDockerMachine
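	
	Nearly all of the 1m31.5s reported for provisionDockerMachine falls between the `sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio` command sent at 01:08:56 and its reply at 01:10:27, i.e. restarting CRI-O on the already-running control plane dominates re-provisioning. If that restart needs to be investigated, the CRI-O journal on the guest is the first place to look (a diagnostic sketch, not part of the test run):
	
		minikube -p ha-300623 ssh "sudo journalctl -u crio --no-pager -n 100"
	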
	I1026 01:10:27.107535   33649 start.go:293] postStartSetup for "ha-300623" (driver="kvm2")
	I1026 01:10:27.107547   33649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:10:27.107568   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:10:27.107919   33649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:10:27.107949   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:10:27.110959   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.111308   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:10:27.111332   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.111495   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:10:27.111686   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:10:27.111857   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:10:27.111983   33649 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:10:27.201316   33649 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:10:27.205477   33649 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:10:27.205508   33649 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:10:27.205588   33649 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:10:27.205706   33649 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:10:27.205719   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /etc/ssl/certs/176152.pem
	I1026 01:10:27.205839   33649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:10:27.214970   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:10:27.237645   33649 start.go:296] duration metric: took 130.093775ms for postStartSetup
	I1026 01:10:27.237689   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:10:27.237955   33649 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1026 01:10:27.237977   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:10:27.240769   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.241283   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:10:27.241311   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.241496   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:10:27.241694   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:10:27.241844   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:10:27.241961   33649 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	W1026 01:10:27.328117   33649 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1026 01:10:27.328142   33649 fix.go:56] duration metric: took 1m31.763089862s for fixHost
	I1026 01:10:27.328166   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:10:27.330818   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.331182   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:10:27.331210   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.331317   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:10:27.331494   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:10:27.331628   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:10:27.331737   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:10:27.331860   33649 main.go:141] libmachine: Using SSH client type: native
	I1026 01:10:27.332049   33649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1026 01:10:27.332063   33649 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:10:27.441936   33649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729905027.399947841
	
	I1026 01:10:27.441956   33649 fix.go:216] guest clock: 1729905027.399947841
	I1026 01:10:27.441968   33649 fix.go:229] Guest: 2024-10-26 01:10:27.399947841 +0000 UTC Remote: 2024-10-26 01:10:27.328149088 +0000 UTC m=+91.889873341 (delta=71.798753ms)
	I1026 01:10:27.442007   33649 fix.go:200] guest clock delta is within tolerance: 71.798753ms
	I1026 01:10:27.442013   33649 start.go:83] releasing machines lock for "ha-300623", held for 1m31.87697823s
	I1026 01:10:27.442029   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:10:27.442284   33649 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:10:27.444841   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.445176   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:10:27.445196   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.445384   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:10:27.445870   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:10:27.446021   33649 main.go:141] libmachine: (ha-300623) Calling .DriverName
	I1026 01:10:27.446086   33649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:10:27.446139   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:10:27.446191   33649 ssh_runner.go:195] Run: cat /version.json
	I1026 01:10:27.446215   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHHostname
	I1026 01:10:27.448636   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.448934   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:10:27.448960   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.449009   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.449138   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:10:27.449305   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:10:27.449398   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:10:27.449444   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:27.449448   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:10:27.449606   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHPort
	I1026 01:10:27.449599   33649 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:10:27.449784   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHKeyPath
	I1026 01:10:27.449963   33649 main.go:141] libmachine: (ha-300623) Calling .GetSSHUsername
	I1026 01:10:27.450094   33649 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/ha-300623/id_rsa Username:docker}
	I1026 01:10:27.530809   33649 ssh_runner.go:195] Run: systemctl --version
	I1026 01:10:27.556941   33649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:10:27.716547   33649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:10:27.725391   33649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:10:27.725496   33649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:10:27.734551   33649 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 01:10:27.734575   33649 start.go:495] detecting cgroup driver to use...
	I1026 01:10:27.734653   33649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:10:27.750749   33649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:10:27.765207   33649 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:10:27.765266   33649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:10:27.779013   33649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:10:27.792254   33649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:10:27.943873   33649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:10:28.086730   33649 docker.go:233] disabling docker service ...
	I1026 01:10:28.086810   33649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:10:28.103469   33649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:10:28.116868   33649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:10:28.291455   33649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:10:28.452338   33649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:10:28.466471   33649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:10:28.485805   33649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:10:28.485869   33649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:10:28.496181   33649 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:10:28.496248   33649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:10:28.506301   33649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:10:28.516784   33649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:10:28.527104   33649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:10:28.537732   33649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:10:28.548364   33649 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:10:28.559572   33649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:10:28.569724   33649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:10:28.579314   33649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:10:28.589200   33649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:10:28.738121   33649 ssh_runner.go:195] Run: sudo systemctl restart crio
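	
	The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image to registry.k8s.io/pause:3.10, set cgroup_manager to "cgroupfs" with conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before CRI-O is restarted. The effective values can be confirmed on the guest with the same tooling the test itself uses (a sketch):
	
		minikube -p ha-300623 ssh "sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup'"
	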
	I1026 01:10:29.494343   33649 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:10:29.494419   33649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:10:29.500059   33649 start.go:563] Will wait 60s for crictl version
	I1026 01:10:29.500121   33649 ssh_runner.go:195] Run: which crictl
	I1026 01:10:29.503676   33649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:10:29.540001   33649 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:10:29.540090   33649 ssh_runner.go:195] Run: crio --version
	I1026 01:10:29.567687   33649 ssh_runner.go:195] Run: crio --version
	I1026 01:10:29.597322   33649 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:10:29.598784   33649 main.go:141] libmachine: (ha-300623) Calling .GetIP
	I1026 01:10:29.601552   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:29.602219   33649 main.go:141] libmachine: (ha-300623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:a0:46", ip: ""} in network mk-ha-300623: {Iface:virbr1 ExpiryTime:2024-10-26 01:59:55 +0000 UTC Type:0 Mac:52:54:00:4d:a0:46 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-300623 Clientid:01:52:54:00:4d:a0:46}
	I1026 01:10:29.602241   33649 main.go:141] libmachine: (ha-300623) DBG | domain ha-300623 has defined IP address 192.168.39.183 and MAC address 52:54:00:4d:a0:46 in network mk-ha-300623
	I1026 01:10:29.602529   33649 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:10:29.607026   33649 kubeadm.go:883] updating cluster {Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.197 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stor
ageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 01:10:29.607152   33649 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:10:29.607195   33649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:10:29.651349   33649 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 01:10:29.651378   33649 crio.go:433] Images already preloaded, skipping extraction
	I1026 01:10:29.651446   33649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:10:29.682582   33649 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 01:10:29.682604   33649 cache_images.go:84] Images are preloaded, skipping loading
	I1026 01:10:29.682616   33649 kubeadm.go:934] updating node { 192.168.39.183 8443 v1.31.2 crio true true} ...
	I1026 01:10:29.682754   33649 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-300623 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
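	
	The kubelet drop-in printed above overrides ExecStart to pin --hostname-override=ha-300623 and --node-ip=192.168.39.183 for this node. Whether the unit actually loaded on the guest matches can be checked without assuming the drop-in path (a sketch):
	
		minikube -p ha-300623 ssh "systemctl cat kubelet | grep -A3 ExecStart"
	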
	I1026 01:10:29.682840   33649 ssh_runner.go:195] Run: crio config
	I1026 01:10:29.731311   33649 cni.go:84] Creating CNI manager for ""
	I1026 01:10:29.731333   33649 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1026 01:10:29.731343   33649 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 01:10:29.731376   33649 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-300623 NodeName:ha-300623 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 01:10:29.731524   33649 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-300623"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.183"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 01:10:29.731546   33649 kube-vip.go:115] generating kube-vip config ...
	I1026 01:10:29.731585   33649 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1026 01:10:29.742547   33649 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1026 01:10:29.742662   33649 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1026 01:10:29.742725   33649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:10:29.752124   33649 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 01:10:29.752208   33649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1026 01:10:29.761412   33649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1026 01:10:29.777702   33649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:10:29.793807   33649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1026 01:10:29.810417   33649 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1026 01:10:29.827763   33649 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1026 01:10:29.832629   33649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:10:29.983222   33649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:10:29.997860   33649 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623 for IP: 192.168.39.183
	I1026 01:10:29.997884   33649 certs.go:194] generating shared ca certs ...
	I1026 01:10:29.997899   33649 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:10:29.998058   33649 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:10:29.998126   33649 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:10:29.998142   33649 certs.go:256] generating profile certs ...
	I1026 01:10:29.998244   33649 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/client.key
	I1026 01:10:29.998274   33649 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.b49fea96
	I1026 01:10:29.998293   33649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.b49fea96 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183 192.168.39.62 192.168.39.180 192.168.39.254]
	I1026 01:10:30.128480   33649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.b49fea96 ...
	I1026 01:10:30.128509   33649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.b49fea96: {Name:mk15171cd87aebeeb1954b8b9ced93c1b8ee279d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:10:30.128691   33649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.b49fea96 ...
	I1026 01:10:30.128706   33649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.b49fea96: {Name:mk56a2c08358344f6c0e8ae27054ecf9d5383934 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:10:30.128796   33649 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt.b49fea96 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt
	I1026 01:10:30.128986   33649 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key.b49fea96 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key
	I1026 01:10:30.129122   33649 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key
	I1026 01:10:30.129144   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:10:30.129165   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:10:30.129182   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:10:30.129199   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:10:30.129214   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:10:30.129228   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:10:30.129243   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:10:30.129257   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:10:30.129323   33649 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:10:30.129364   33649 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:10:30.129379   33649 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:10:30.129414   33649 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:10:30.129482   33649 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:10:30.129516   33649 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:10:30.129568   33649 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:10:30.129608   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /usr/share/ca-certificates/176152.pem
	I1026 01:10:30.129628   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:10:30.129644   33649 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem -> /usr/share/ca-certificates/17615.pem
	I1026 01:10:30.130225   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:10:30.155172   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:10:30.177371   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:10:30.200743   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:10:30.224262   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 01:10:30.247513   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 01:10:30.279555   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:10:30.302523   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/ha-300623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:10:30.324873   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:10:30.348139   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:10:30.371770   33649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:10:30.394434   33649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 01:10:30.410934   33649 ssh_runner.go:195] Run: openssl version
	I1026 01:10:30.416475   33649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:10:30.428798   33649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:10:30.432899   33649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:10:30.432967   33649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:10:30.438311   33649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:10:30.447111   33649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:10:30.456887   33649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:10:30.460965   33649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:10:30.461010   33649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:10:30.466238   33649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:10:30.474929   33649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:10:30.484730   33649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:10:30.488854   33649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:10:30.488902   33649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:10:30.494077   33649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 01:10:30.502718   33649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:10:30.506872   33649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 01:10:30.512239   33649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 01:10:30.517293   33649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 01:10:30.522424   33649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 01:10:30.527856   33649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 01:10:30.533048   33649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 01:10:30.538210   33649 kubeadm.go:392] StartCluster: {Name:ha-300623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-300623 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.197 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:10:30.538309   33649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 01:10:30.538361   33649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 01:10:30.572699   33649 cri.go:89] found id: "3563301f0c57cea95eece52811ebb342c4141daa311af0de187d483fc414c78b"
	I1026 01:10:30.572728   33649 cri.go:89] found id: "4bbadc1ee6738fa748337dc1739972bb5863fc5a70ad43bb158811e22ebdcc5f"
	I1026 01:10:30.572735   33649 cri.go:89] found id: "38a96f0d31c5e6dce1082a1f11e8f87dc2d7ea33057e42366a1e6e2475656626"
	I1026 01:10:30.572740   33649 cri.go:89] found id: "ca2bd9d7fe0a2f6971a71c05272fcc21515b2ff600dc8b820061b3d468158758"
	I1026 01:10:30.572744   33649 cri.go:89] found id: "56c849c3f6d25cfd647fad7d43444c6cad847ab55e2b8906e43ddc2516f02e9d"
	I1026 01:10:30.572748   33649 cri.go:89] found id: "d6d0d55128c158e2ca6fed28bf87392c84967498ba96ba219ca526f2a7626bde"
	I1026 01:10:30.572752   33649 cri.go:89] found id: "f7fca08cb5de6eaebc97344fca0b32833d7ce0f79824a880ef167e9cf26423fa"
	I1026 01:10:30.572755   33649 cri.go:89] found id: "a103c720401684df743d1bbe2dbfbef3c4fd3b215d40b954bc8d32999843323c"
	I1026 01:10:30.572758   33649 cri.go:89] found id: "47a0b2ec9c50d98cf61afcede9ca59f3922eca5f3342d28ad9687316e5046ba3"
	I1026 01:10:30.572763   33649 cri.go:89] found id: "3e321e090fa4b28d01db04f1450f5447a294cbba7c255f31043ed6ec327b9901"
	I1026 01:10:30.572766   33649 cri.go:89] found id: "3c25e47b58ddc55a9f9e6223a0e575d02bef4b992ec7d449ac4d2678adc1b42b"
	I1026 01:10:30.572769   33649 cri.go:89] found id: "3bcea9b84ac3779215d937a5f2e1fa0f31a346a84e6373ab94522b972658773d"
	I1026 01:10:30.572771   33649 cri.go:89] found id: ""
	I1026 01:10:30.572809   33649 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-300623 -n ha-300623
helpers_test.go:261: (dbg) Run:  kubectl --context ha-300623 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.09s)
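
Note: in the post-mortem log above, minikube reuses the existing control-plane certificates only after each one passes "openssl x509 -noout -checkend 86400". The snippet below is a minimal, illustrative Go sketch of that same 24-hour expiry check; it is not minikube's implementation, and the certificate path is simply the first one checked in the log.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Certificate path as it appears in the log above.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of "-checkend 86400": fail unless the certificate stays valid
		// for at least another 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h; it would be regenerated")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h; safe to reuse")
	}
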

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (325.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-328488
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-328488
E1026 01:31:37.285975   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-328488: exit status 82 (2m1.785784465s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-328488-m03"  ...
	* Stopping node "multinode-328488-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-328488" : exit status 82
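
Note: the stop here exits with status 82, which the CLI output above reports as GUEST_STOP_TIMEOUT (at least one VM was still reported as "Running" when the stop gave up). The snippet below is a minimal, illustrative Go sketch of how a caller such as the test harness can invoke the same command and read that exit code; it is not the actual test code, and the binary path and profile name are taken from the log.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Binary path and profile name as they appear in the log above.
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-328488")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// In this run the exit code was 82, reported as GUEST_STOP_TIMEOUT.
			fmt.Printf("minikube stop exited with status %d\n", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("failed to run minikube stop:", err)
		}
	}
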
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-328488 --wait=true -v=8 --alsologtostderr
E1026 01:33:52.966181   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:36:37.284590   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-328488 --wait=true -v=8 --alsologtostderr: (3m20.84835534s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-328488
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-328488 -n multinode-328488
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-328488 logs -n 25: (1.999397575s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-328488 ssh -n                                                                 | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328488 cp multinode-328488-m02:/home/docker/cp-test.txt                       | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2176224653/001/cp-test_multinode-328488-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n                                                                 | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328488 cp multinode-328488-m02:/home/docker/cp-test.txt                       | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488:/home/docker/cp-test_multinode-328488-m02_multinode-328488.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n                                                                 | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n multinode-328488 sudo cat                                       | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | /home/docker/cp-test_multinode-328488-m02_multinode-328488.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-328488 cp multinode-328488-m02:/home/docker/cp-test.txt                       | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m03:/home/docker/cp-test_multinode-328488-m02_multinode-328488-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n                                                                 | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n multinode-328488-m03 sudo cat                                   | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | /home/docker/cp-test_multinode-328488-m02_multinode-328488-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-328488 cp testdata/cp-test.txt                                                | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n                                                                 | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328488 cp multinode-328488-m03:/home/docker/cp-test.txt                       | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2176224653/001/cp-test_multinode-328488-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n                                                                 | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328488 cp multinode-328488-m03:/home/docker/cp-test.txt                       | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488:/home/docker/cp-test_multinode-328488-m03_multinode-328488.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n                                                                 | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n multinode-328488 sudo cat                                       | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | /home/docker/cp-test_multinode-328488-m03_multinode-328488.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-328488 cp multinode-328488-m03:/home/docker/cp-test.txt                       | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m02:/home/docker/cp-test_multinode-328488-m03_multinode-328488-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n                                                                 | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n multinode-328488-m02 sudo cat                                   | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | /home/docker/cp-test_multinode-328488-m03_multinode-328488-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-328488 node stop m03                                                          | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	| node    | multinode-328488 node start                                                             | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:31 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-328488                                                                | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:31 UTC |                     |
	| stop    | -p multinode-328488                                                                     | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:31 UTC |                     |
	| start   | -p multinode-328488                                                                     | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:33 UTC | 26 Oct 24 01:36 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-328488                                                                | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:36 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 01:33:17
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 01:33:17.413842   46163 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:33:17.413939   46163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:33:17.413944   46163 out.go:358] Setting ErrFile to fd 2...
	I1026 01:33:17.413948   46163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:33:17.414151   46163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 01:33:17.414665   46163 out.go:352] Setting JSON to false
	I1026 01:33:17.415498   46163 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4537,"bootTime":1729901860,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 01:33:17.415592   46163 start.go:139] virtualization: kvm guest
	I1026 01:33:17.417552   46163 out.go:177] * [multinode-328488] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 01:33:17.418729   46163 notify.go:220] Checking for updates...
	I1026 01:33:17.418738   46163 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 01:33:17.419984   46163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:33:17.421182   46163 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:33:17.422357   46163 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:33:17.423369   46163 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 01:33:17.424570   46163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:33:17.425993   46163 config.go:182] Loaded profile config "multinode-328488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:33:17.426073   46163 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 01:33:17.426507   46163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:33:17.426545   46163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:33:17.441351   46163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39535
	I1026 01:33:17.441872   46163 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:33:17.442444   46163 main.go:141] libmachine: Using API Version  1
	I1026 01:33:17.442466   46163 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:33:17.442808   46163 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:33:17.442996   46163 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:33:17.477474   46163 out.go:177] * Using the kvm2 driver based on existing profile
	I1026 01:33:17.478652   46163 start.go:297] selected driver: kvm2
	I1026 01:33:17.478666   46163 start.go:901] validating driver "kvm2" against &{Name:multinode-328488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-328488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:33:17.478811   46163 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:33:17.479113   46163 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:33:17.479196   46163 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 01:33:17.494062   46163 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 01:33:17.494867   46163 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 01:33:17.494924   46163 cni.go:84] Creating CNI manager for ""
	I1026 01:33:17.494991   46163 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1026 01:33:17.495070   46163 start.go:340] cluster config:
	{Name:multinode-328488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-328488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:33:17.495234   46163 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:33:17.497245   46163 out.go:177] * Starting "multinode-328488" primary control-plane node in "multinode-328488" cluster
	I1026 01:33:17.498515   46163 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:33:17.498562   46163 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 01:33:17.498572   46163 cache.go:56] Caching tarball of preloaded images
	I1026 01:33:17.498675   46163 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:33:17.498689   46163 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 01:33:17.498797   46163 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/config.json ...
	I1026 01:33:17.499002   46163 start.go:360] acquireMachinesLock for multinode-328488: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 01:33:17.499045   46163 start.go:364] duration metric: took 23.997µs to acquireMachinesLock for "multinode-328488"
	I1026 01:33:17.499064   46163 start.go:96] Skipping create...Using existing machine configuration
	I1026 01:33:17.499073   46163 fix.go:54] fixHost starting: 
	I1026 01:33:17.499325   46163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:33:17.499361   46163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:33:17.513842   46163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45155
	I1026 01:33:17.514300   46163 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:33:17.514862   46163 main.go:141] libmachine: Using API Version  1
	I1026 01:33:17.514884   46163 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:33:17.515224   46163 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:33:17.515418   46163 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:33:17.515567   46163 main.go:141] libmachine: (multinode-328488) Calling .GetState
	I1026 01:33:17.517117   46163 fix.go:112] recreateIfNeeded on multinode-328488: state=Running err=<nil>
	W1026 01:33:17.517140   46163 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 01:33:17.519133   46163 out.go:177] * Updating the running kvm2 "multinode-328488" VM ...
	I1026 01:33:17.520376   46163 machine.go:93] provisionDockerMachine start ...
	I1026 01:33:17.520395   46163 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:33:17.520611   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:33:17.522975   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.523399   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:33:17.523431   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.523545   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:33:17.523744   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:17.523890   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:17.524026   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:33:17.524143   46163 main.go:141] libmachine: Using SSH client type: native
	I1026 01:33:17.524325   46163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1026 01:33:17.524336   46163 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 01:33:17.642289   46163 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-328488
	
	I1026 01:33:17.642318   46163 main.go:141] libmachine: (multinode-328488) Calling .GetMachineName
	I1026 01:33:17.642538   46163 buildroot.go:166] provisioning hostname "multinode-328488"
	I1026 01:33:17.642561   46163 main.go:141] libmachine: (multinode-328488) Calling .GetMachineName
	I1026 01:33:17.642711   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:33:17.645380   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.645846   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:33:17.645872   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.646008   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:33:17.646169   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:17.646295   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:17.646414   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:33:17.646576   46163 main.go:141] libmachine: Using SSH client type: native
	I1026 01:33:17.646784   46163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1026 01:33:17.646796   46163 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-328488 && echo "multinode-328488" | sudo tee /etc/hostname
	I1026 01:33:17.775394   46163 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-328488
	
	I1026 01:33:17.775421   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:33:17.778193   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.778555   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:33:17.778590   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.778718   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:33:17.778916   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:17.779047   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:17.779170   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:33:17.779328   46163 main.go:141] libmachine: Using SSH client type: native
	I1026 01:33:17.779495   46163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1026 01:33:17.779512   46163 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-328488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-328488/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-328488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:33:17.889827   46163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:33:17.889858   46163 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:33:17.889879   46163 buildroot.go:174] setting up certificates
	I1026 01:33:17.889898   46163 provision.go:84] configureAuth start
	I1026 01:33:17.889912   46163 main.go:141] libmachine: (multinode-328488) Calling .GetMachineName
	I1026 01:33:17.890200   46163 main.go:141] libmachine: (multinode-328488) Calling .GetIP
	I1026 01:33:17.892550   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.892917   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:33:17.892946   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.893099   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:33:17.895364   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.895639   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:33:17.895666   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.895782   46163 provision.go:143] copyHostCerts
	I1026 01:33:17.895810   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:33:17.895850   46163 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:33:17.895861   46163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:33:17.895945   46163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:33:17.896041   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:33:17.896067   46163 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:33:17.896076   46163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:33:17.896113   46163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:33:17.896173   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:33:17.896195   46163 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:33:17.896202   46163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:33:17.896233   46163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:33:17.896302   46163 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.multinode-328488 san=[127.0.0.1 192.168.39.35 localhost minikube multinode-328488]
	I1026 01:33:18.046434   46163 provision.go:177] copyRemoteCerts
	I1026 01:33:18.046487   46163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:33:18.046509   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:33:18.049183   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:18.049535   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:33:18.049563   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:18.049737   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:33:18.049885   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:18.050052   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:33:18.050135   46163 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/multinode-328488/id_rsa Username:docker}
	I1026 01:33:18.136102   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:33:18.136162   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:33:18.161196   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:33:18.161252   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1026 01:33:18.185805   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:33:18.185883   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 01:33:18.210153   46163 provision.go:87] duration metric: took 320.240077ms to configureAuth
	I1026 01:33:18.210191   46163 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:33:18.210433   46163 config.go:182] Loaded profile config "multinode-328488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:33:18.210500   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:33:18.213140   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:18.213644   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:33:18.213689   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:18.213937   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:33:18.214109   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:18.214250   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:18.214373   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:33:18.214560   46163 main.go:141] libmachine: Using SSH client type: native
	I1026 01:33:18.214737   46163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1026 01:33:18.214755   46163 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:34:49.032290   46163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:34:49.032336   46163 machine.go:96] duration metric: took 1m31.511932811s to provisionDockerMachine
	I1026 01:34:49.032354   46163 start.go:293] postStartSetup for "multinode-328488" (driver="kvm2")
	I1026 01:34:49.032370   46163 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:34:49.032397   46163 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:34:49.032710   46163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:34:49.032745   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:34:49.036094   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.036564   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:34:49.036596   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.036745   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:34:49.036950   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:34:49.037090   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:34:49.037204   46163 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/multinode-328488/id_rsa Username:docker}
	I1026 01:34:49.124246   46163 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:34:49.128168   46163 command_runner.go:130] > NAME=Buildroot
	I1026 01:34:49.128188   46163 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1026 01:34:49.128194   46163 command_runner.go:130] > ID=buildroot
	I1026 01:34:49.128202   46163 command_runner.go:130] > VERSION_ID=2023.02.9
	I1026 01:34:49.128210   46163 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1026 01:34:49.128300   46163 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:34:49.128322   46163 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:34:49.128394   46163 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:34:49.128485   46163 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:34:49.128496   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /etc/ssl/certs/176152.pem
	I1026 01:34:49.128617   46163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:34:49.137348   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:34:49.159468   46163 start.go:296] duration metric: took 127.100086ms for postStartSetup
	I1026 01:34:49.159508   46163 fix.go:56] duration metric: took 1m31.660434402s for fixHost
	I1026 01:34:49.159531   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:34:49.162346   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.162710   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:34:49.162732   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.162913   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:34:49.163084   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:34:49.163220   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:34:49.163324   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:34:49.163471   46163 main.go:141] libmachine: Using SSH client type: native
	I1026 01:34:49.163635   46163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1026 01:34:49.163646   46163 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:34:49.273919   46163 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729906489.248573908
	
	I1026 01:34:49.273944   46163 fix.go:216] guest clock: 1729906489.248573908
	I1026 01:34:49.273951   46163 fix.go:229] Guest: 2024-10-26 01:34:49.248573908 +0000 UTC Remote: 2024-10-26 01:34:49.159513005 +0000 UTC m=+91.782993940 (delta=89.060903ms)
	I1026 01:34:49.273995   46163 fix.go:200] guest clock delta is within tolerance: 89.060903ms
	I1026 01:34:49.274001   46163 start.go:83] releasing machines lock for "multinode-328488", held for 1m31.774945295s
	I1026 01:34:49.274018   46163 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:34:49.274252   46163 main.go:141] libmachine: (multinode-328488) Calling .GetIP
	I1026 01:34:49.276716   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.277062   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:34:49.277090   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.277230   46163 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:34:49.277751   46163 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:34:49.277909   46163 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:34:49.278013   46163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:34:49.278057   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:34:49.278114   46163 ssh_runner.go:195] Run: cat /version.json
	I1026 01:34:49.278140   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:34:49.280630   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.280896   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.281011   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:34:49.281044   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.281183   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:34:49.281352   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:34:49.281374   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:34:49.281375   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.281532   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:34:49.281544   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:34:49.281711   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:34:49.281703   46163 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/multinode-328488/id_rsa Username:docker}
	I1026 01:34:49.281824   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:34:49.281950   46163 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/multinode-328488/id_rsa Username:docker}
	I1026 01:34:49.394245   46163 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1026 01:34:49.394293   46163 command_runner.go:130] > {"iso_version": "v1.34.0-1729002252-19806", "kicbase_version": "v0.0.45-1728382586-19774", "minikube_version": "v1.34.0", "commit": "0b046a85be42f4631dd3453091a30d7fc1803a43"}
	I1026 01:34:49.394455   46163 ssh_runner.go:195] Run: systemctl --version
	I1026 01:34:49.400364   46163 command_runner.go:130] > systemd 252 (252)
	I1026 01:34:49.400408   46163 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1026 01:34:49.400476   46163 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:34:49.557526   46163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1026 01:34:49.565251   46163 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1026 01:34:49.565313   46163 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:34:49.565361   46163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:34:49.574827   46163 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 01:34:49.574856   46163 start.go:495] detecting cgroup driver to use...
	I1026 01:34:49.574923   46163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:34:49.591196   46163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:34:49.605179   46163 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:34:49.605239   46163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:34:49.618964   46163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:34:49.633086   46163 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:34:49.781533   46163 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:34:49.938597   46163 docker.go:233] disabling docker service ...
	I1026 01:34:49.938676   46163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:34:49.957306   46163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:34:49.970947   46163 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:34:50.108603   46163 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:34:50.244548   46163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:34:50.258723   46163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:34:50.275991   46163 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1026 01:34:50.276240   46163 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:34:50.276323   46163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:34:50.287767   46163 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:34:50.287859   46163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:34:50.298540   46163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:34:50.309248   46163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:34:50.320754   46163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:34:50.332192   46163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:34:50.343125   46163 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:34:50.354191   46163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:34:50.365288   46163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:34:50.375341   46163 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1026 01:34:50.375434   46163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:34:50.385434   46163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:34:50.519806   46163 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 01:34:50.711608   46163 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:34:50.711677   46163 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:34:50.716212   46163 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1026 01:34:50.716228   46163 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1026 01:34:50.716234   46163 command_runner.go:130] > Device: 0,22	Inode: 1281        Links: 1
	I1026 01:34:50.716243   46163 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1026 01:34:50.716248   46163 command_runner.go:130] > Access: 2024-10-26 01:34:50.584582012 +0000
	I1026 01:34:50.716253   46163 command_runner.go:130] > Modify: 2024-10-26 01:34:50.584582012 +0000
	I1026 01:34:50.716260   46163 command_runner.go:130] > Change: 2024-10-26 01:34:50.584582012 +0000
	I1026 01:34:50.716265   46163 command_runner.go:130] >  Birth: -
	I1026 01:34:50.716378   46163 start.go:563] Will wait 60s for crictl version
	I1026 01:34:50.716441   46163 ssh_runner.go:195] Run: which crictl
	I1026 01:34:50.719880   46163 command_runner.go:130] > /usr/bin/crictl
	I1026 01:34:50.719959   46163 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:34:50.758298   46163 command_runner.go:130] > Version:  0.1.0
	I1026 01:34:50.758322   46163 command_runner.go:130] > RuntimeName:  cri-o
	I1026 01:34:50.758327   46163 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1026 01:34:50.758333   46163 command_runner.go:130] > RuntimeApiVersion:  v1
	I1026 01:34:50.758398   46163 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:34:50.758499   46163 ssh_runner.go:195] Run: crio --version
	I1026 01:34:50.786260   46163 command_runner.go:130] > crio version 1.29.1
	I1026 01:34:50.786289   46163 command_runner.go:130] > Version:        1.29.1
	I1026 01:34:50.786298   46163 command_runner.go:130] > GitCommit:      unknown
	I1026 01:34:50.786325   46163 command_runner.go:130] > GitCommitDate:  unknown
	I1026 01:34:50.786332   46163 command_runner.go:130] > GitTreeState:   clean
	I1026 01:34:50.786341   46163 command_runner.go:130] > BuildDate:      2024-10-15T20:00:52Z
	I1026 01:34:50.786347   46163 command_runner.go:130] > GoVersion:      go1.21.6
	I1026 01:34:50.786355   46163 command_runner.go:130] > Compiler:       gc
	I1026 01:34:50.786362   46163 command_runner.go:130] > Platform:       linux/amd64
	I1026 01:34:50.786369   46163 command_runner.go:130] > Linkmode:       dynamic
	I1026 01:34:50.786377   46163 command_runner.go:130] > BuildTags:      
	I1026 01:34:50.786386   46163 command_runner.go:130] >   containers_image_ostree_stub
	I1026 01:34:50.786393   46163 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1026 01:34:50.786403   46163 command_runner.go:130] >   btrfs_noversion
	I1026 01:34:50.786411   46163 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1026 01:34:50.786417   46163 command_runner.go:130] >   libdm_no_deferred_remove
	I1026 01:34:50.786424   46163 command_runner.go:130] >   seccomp
	I1026 01:34:50.786432   46163 command_runner.go:130] > LDFlags:          unknown
	I1026 01:34:50.786439   46163 command_runner.go:130] > SeccompEnabled:   true
	I1026 01:34:50.786447   46163 command_runner.go:130] > AppArmorEnabled:  false
	I1026 01:34:50.787722   46163 ssh_runner.go:195] Run: crio --version
	I1026 01:34:50.815858   46163 command_runner.go:130] > crio version 1.29.1
	I1026 01:34:50.815899   46163 command_runner.go:130] > Version:        1.29.1
	I1026 01:34:50.815905   46163 command_runner.go:130] > GitCommit:      unknown
	I1026 01:34:50.815909   46163 command_runner.go:130] > GitCommitDate:  unknown
	I1026 01:34:50.815913   46163 command_runner.go:130] > GitTreeState:   clean
	I1026 01:34:50.815918   46163 command_runner.go:130] > BuildDate:      2024-10-15T20:00:52Z
	I1026 01:34:50.815922   46163 command_runner.go:130] > GoVersion:      go1.21.6
	I1026 01:34:50.815926   46163 command_runner.go:130] > Compiler:       gc
	I1026 01:34:50.815930   46163 command_runner.go:130] > Platform:       linux/amd64
	I1026 01:34:50.815934   46163 command_runner.go:130] > Linkmode:       dynamic
	I1026 01:34:50.815942   46163 command_runner.go:130] > BuildTags:      
	I1026 01:34:50.815948   46163 command_runner.go:130] >   containers_image_ostree_stub
	I1026 01:34:50.815952   46163 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1026 01:34:50.815960   46163 command_runner.go:130] >   btrfs_noversion
	I1026 01:34:50.815964   46163 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1026 01:34:50.815967   46163 command_runner.go:130] >   libdm_no_deferred_remove
	I1026 01:34:50.815971   46163 command_runner.go:130] >   seccomp
	I1026 01:34:50.815978   46163 command_runner.go:130] > LDFlags:          unknown
	I1026 01:34:50.815981   46163 command_runner.go:130] > SeccompEnabled:   true
	I1026 01:34:50.815986   46163 command_runner.go:130] > AppArmorEnabled:  false
	I1026 01:34:50.819370   46163 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:34:50.820742   46163 main.go:141] libmachine: (multinode-328488) Calling .GetIP
	I1026 01:34:50.823538   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:50.823872   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:34:50.823902   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:50.824138   46163 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:34:50.828384   46163 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1026 01:34:50.828519   46163 kubeadm.go:883] updating cluster {Name:multinode-328488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-328488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 01:34:50.828699   46163 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:34:50.828758   46163 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:34:50.870101   46163 command_runner.go:130] > {
	I1026 01:34:50.870128   46163 command_runner.go:130] >   "images": [
	I1026 01:34:50.870134   46163 command_runner.go:130] >     {
	I1026 01:34:50.870144   46163 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1026 01:34:50.870150   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.870158   46163 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1026 01:34:50.870164   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870170   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.870183   46163 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1026 01:34:50.870196   46163 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1026 01:34:50.870217   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870229   46163 command_runner.go:130] >       "size": "94965812",
	I1026 01:34:50.870238   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.870247   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.870258   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.870268   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.870274   46163 command_runner.go:130] >     },
	I1026 01:34:50.870282   46163 command_runner.go:130] >     {
	I1026 01:34:50.870293   46163 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1026 01:34:50.870303   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.870313   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1026 01:34:50.870322   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870332   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.870347   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1026 01:34:50.870361   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1026 01:34:50.870374   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870384   46163 command_runner.go:130] >       "size": "1363676",
	I1026 01:34:50.870392   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.870410   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.870419   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.870426   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.870436   46163 command_runner.go:130] >     },
	I1026 01:34:50.870444   46163 command_runner.go:130] >     {
	I1026 01:34:50.870455   46163 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1026 01:34:50.870466   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.870478   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1026 01:34:50.870487   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870495   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.870511   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1026 01:34:50.870527   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1026 01:34:50.870535   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870543   46163 command_runner.go:130] >       "size": "31470524",
	I1026 01:34:50.870553   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.870568   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.870578   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.870585   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.870593   46163 command_runner.go:130] >     },
	I1026 01:34:50.870600   46163 command_runner.go:130] >     {
	I1026 01:34:50.870613   46163 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1026 01:34:50.870622   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.870631   46163 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1026 01:34:50.870639   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870647   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.870663   46163 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1026 01:34:50.870692   46163 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1026 01:34:50.870702   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870708   46163 command_runner.go:130] >       "size": "63273227",
	I1026 01:34:50.870715   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.870722   46163 command_runner.go:130] >       "username": "nonroot",
	I1026 01:34:50.870732   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.870740   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.870748   46163 command_runner.go:130] >     },
	I1026 01:34:50.870754   46163 command_runner.go:130] >     {
	I1026 01:34:50.870767   46163 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1026 01:34:50.870774   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.870782   46163 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1026 01:34:50.870789   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870799   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.870814   46163 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1026 01:34:50.870828   46163 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1026 01:34:50.870836   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870844   46163 command_runner.go:130] >       "size": "149009664",
	I1026 01:34:50.870854   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.870863   46163 command_runner.go:130] >         "value": "0"
	I1026 01:34:50.870871   46163 command_runner.go:130] >       },
	I1026 01:34:50.870879   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.870895   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.870906   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.870913   46163 command_runner.go:130] >     },
	I1026 01:34:50.870920   46163 command_runner.go:130] >     {
	I1026 01:34:50.870932   46163 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1026 01:34:50.870940   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.870951   46163 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1026 01:34:50.870960   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870968   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.870984   46163 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1026 01:34:50.870999   46163 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1026 01:34:50.871007   46163 command_runner.go:130] >       ],
	I1026 01:34:50.871015   46163 command_runner.go:130] >       "size": "95274464",
	I1026 01:34:50.871031   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.871041   46163 command_runner.go:130] >         "value": "0"
	I1026 01:34:50.871047   46163 command_runner.go:130] >       },
	I1026 01:34:50.871054   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.871064   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.871072   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.871081   46163 command_runner.go:130] >     },
	I1026 01:34:50.871088   46163 command_runner.go:130] >     {
	I1026 01:34:50.871101   46163 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1026 01:34:50.871111   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.871122   46163 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1026 01:34:50.871131   46163 command_runner.go:130] >       ],
	I1026 01:34:50.871138   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.871154   46163 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1026 01:34:50.871170   46163 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1026 01:34:50.871180   46163 command_runner.go:130] >       ],
	I1026 01:34:50.871188   46163 command_runner.go:130] >       "size": "89474374",
	I1026 01:34:50.871198   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.871207   46163 command_runner.go:130] >         "value": "0"
	I1026 01:34:50.871214   46163 command_runner.go:130] >       },
	I1026 01:34:50.871235   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.871245   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.871252   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.871259   46163 command_runner.go:130] >     },
	I1026 01:34:50.871266   46163 command_runner.go:130] >     {
	I1026 01:34:50.871279   46163 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1026 01:34:50.871289   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.871298   46163 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1026 01:34:50.871307   46163 command_runner.go:130] >       ],
	I1026 01:34:50.871315   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.871741   46163 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1026 01:34:50.871802   46163 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1026 01:34:50.871818   46163 command_runner.go:130] >       ],
	I1026 01:34:50.871834   46163 command_runner.go:130] >       "size": "92783513",
	I1026 01:34:50.871856   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.871871   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.871994   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.872012   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.872027   46163 command_runner.go:130] >     },
	I1026 01:34:50.872040   46163 command_runner.go:130] >     {
	I1026 01:34:50.872066   46163 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1026 01:34:50.872080   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.872097   46163 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1026 01:34:50.872112   46163 command_runner.go:130] >       ],
	I1026 01:34:50.872127   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.872153   46163 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1026 01:34:50.872173   46163 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1026 01:34:50.872193   46163 command_runner.go:130] >       ],
	I1026 01:34:50.872207   46163 command_runner.go:130] >       "size": "68457798",
	I1026 01:34:50.872221   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.872236   46163 command_runner.go:130] >         "value": "0"
	I1026 01:34:50.872250   46163 command_runner.go:130] >       },
	I1026 01:34:50.872265   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.872305   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.872320   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.872334   46163 command_runner.go:130] >     },
	I1026 01:34:50.872348   46163 command_runner.go:130] >     {
	I1026 01:34:50.872364   46163 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1026 01:34:50.872379   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.872400   46163 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1026 01:34:50.872413   46163 command_runner.go:130] >       ],
	I1026 01:34:50.872428   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.872447   46163 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1026 01:34:50.872472   46163 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1026 01:34:50.872486   46163 command_runner.go:130] >       ],
	I1026 01:34:50.872501   46163 command_runner.go:130] >       "size": "742080",
	I1026 01:34:50.872515   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.872535   46163 command_runner.go:130] >         "value": "65535"
	I1026 01:34:50.872550   46163 command_runner.go:130] >       },
	I1026 01:34:50.872564   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.872578   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.872592   46163 command_runner.go:130] >       "pinned": true
	I1026 01:34:50.872624   46163 command_runner.go:130] >     }
	I1026 01:34:50.872635   46163 command_runner.go:130] >   ]
	I1026 01:34:50.872673   46163 command_runner.go:130] > }
	I1026 01:34:50.873301   46163 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 01:34:50.873317   46163 crio.go:433] Images already preloaded, skipping extraction
	I1026 01:34:50.873361   46163 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:34:50.905504   46163 command_runner.go:130] > {
	I1026 01:34:50.905536   46163 command_runner.go:130] >   "images": [
	I1026 01:34:50.905543   46163 command_runner.go:130] >     {
	I1026 01:34:50.905551   46163 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1026 01:34:50.905557   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.905563   46163 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1026 01:34:50.905567   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905571   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.905581   46163 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1026 01:34:50.905589   46163 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1026 01:34:50.905592   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905598   46163 command_runner.go:130] >       "size": "94965812",
	I1026 01:34:50.905602   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.905609   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.905616   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.905620   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.905626   46163 command_runner.go:130] >     },
	I1026 01:34:50.905629   46163 command_runner.go:130] >     {
	I1026 01:34:50.905635   46163 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1026 01:34:50.905639   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.905648   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1026 01:34:50.905652   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905658   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.905664   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1026 01:34:50.905674   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1026 01:34:50.905677   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905681   46163 command_runner.go:130] >       "size": "1363676",
	I1026 01:34:50.905691   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.905700   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.905704   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.905708   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.905711   46163 command_runner.go:130] >     },
	I1026 01:34:50.905715   46163 command_runner.go:130] >     {
	I1026 01:34:50.905721   46163 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1026 01:34:50.905726   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.905731   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1026 01:34:50.905734   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905744   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.905754   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1026 01:34:50.905761   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1026 01:34:50.905767   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905771   46163 command_runner.go:130] >       "size": "31470524",
	I1026 01:34:50.905775   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.905780   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.905786   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.905789   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.905793   46163 command_runner.go:130] >     },
	I1026 01:34:50.905798   46163 command_runner.go:130] >     {
	I1026 01:34:50.905804   46163 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1026 01:34:50.905811   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.905816   46163 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1026 01:34:50.905820   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905823   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.905832   46163 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1026 01:34:50.905843   46163 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1026 01:34:50.905849   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905853   46163 command_runner.go:130] >       "size": "63273227",
	I1026 01:34:50.905857   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.905863   46163 command_runner.go:130] >       "username": "nonroot",
	I1026 01:34:50.905870   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.905873   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.905877   46163 command_runner.go:130] >     },
	I1026 01:34:50.905881   46163 command_runner.go:130] >     {
	I1026 01:34:50.905887   46163 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1026 01:34:50.905898   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.905903   46163 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1026 01:34:50.905909   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905913   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.905919   46163 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1026 01:34:50.905926   46163 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1026 01:34:50.905937   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905943   46163 command_runner.go:130] >       "size": "149009664",
	I1026 01:34:50.905947   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.905953   46163 command_runner.go:130] >         "value": "0"
	I1026 01:34:50.905957   46163 command_runner.go:130] >       },
	I1026 01:34:50.905960   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.905964   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.905968   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.905972   46163 command_runner.go:130] >     },
	I1026 01:34:50.905975   46163 command_runner.go:130] >     {
	I1026 01:34:50.905987   46163 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1026 01:34:50.905993   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.906001   46163 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1026 01:34:50.906006   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906011   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.906026   46163 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1026 01:34:50.906036   46163 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1026 01:34:50.906041   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906046   46163 command_runner.go:130] >       "size": "95274464",
	I1026 01:34:50.906052   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.906058   46163 command_runner.go:130] >         "value": "0"
	I1026 01:34:50.906063   46163 command_runner.go:130] >       },
	I1026 01:34:50.906070   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.906078   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.906086   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.906092   46163 command_runner.go:130] >     },
	I1026 01:34:50.906101   46163 command_runner.go:130] >     {
	I1026 01:34:50.906112   46163 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1026 01:34:50.906121   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.906130   46163 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1026 01:34:50.906137   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906147   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.906160   46163 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1026 01:34:50.906183   46163 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1026 01:34:50.906194   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906201   46163 command_runner.go:130] >       "size": "89474374",
	I1026 01:34:50.906208   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.906217   46163 command_runner.go:130] >         "value": "0"
	I1026 01:34:50.906225   46163 command_runner.go:130] >       },
	I1026 01:34:50.906235   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.906242   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.906249   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.906258   46163 command_runner.go:130] >     },
	I1026 01:34:50.906265   46163 command_runner.go:130] >     {
	I1026 01:34:50.906278   46163 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1026 01:34:50.906288   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.906300   46163 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1026 01:34:50.906307   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906316   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.906347   46163 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1026 01:34:50.906361   46163 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1026 01:34:50.906368   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906378   46163 command_runner.go:130] >       "size": "92783513",
	I1026 01:34:50.906388   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.906395   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.906405   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.906414   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.906421   46163 command_runner.go:130] >     },
	I1026 01:34:50.906428   46163 command_runner.go:130] >     {
	I1026 01:34:50.906441   46163 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1026 01:34:50.906451   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.906463   46163 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1026 01:34:50.906471   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906478   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.906494   46163 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1026 01:34:50.906510   46163 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1026 01:34:50.906527   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906537   46163 command_runner.go:130] >       "size": "68457798",
	I1026 01:34:50.906546   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.906554   46163 command_runner.go:130] >         "value": "0"
	I1026 01:34:50.906562   46163 command_runner.go:130] >       },
	I1026 01:34:50.906570   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.906579   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.906588   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.906596   46163 command_runner.go:130] >     },
	I1026 01:34:50.906603   46163 command_runner.go:130] >     {
	I1026 01:34:50.906616   46163 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1026 01:34:50.906626   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.906636   46163 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1026 01:34:50.906641   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906647   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.906659   46163 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1026 01:34:50.906676   46163 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1026 01:34:50.906685   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906693   46163 command_runner.go:130] >       "size": "742080",
	I1026 01:34:50.906701   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.906708   46163 command_runner.go:130] >         "value": "65535"
	I1026 01:34:50.906717   46163 command_runner.go:130] >       },
	I1026 01:34:50.906725   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.906740   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.906747   46163 command_runner.go:130] >       "pinned": true
	I1026 01:34:50.906756   46163 command_runner.go:130] >     }
	I1026 01:34:50.906762   46163 command_runner.go:130] >   ]
	I1026 01:34:50.906770   46163 command_runner.go:130] > }
	I1026 01:34:50.906901   46163 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 01:34:50.906913   46163 cache_images.go:84] Images are preloaded, skipping loading
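	(The JSON inventory above is the runtime's view of its image store, which is why the preload step is skipped. As a rough cross-check, a comparable listing can usually be produced directly on the node; this is an illustrative sketch that assumes the multinode-328488 profile from this log and that crictl is available in the guest:
	    minikube ssh -p multinode-328488 -- sudo crictl images -o json
	The exact command minikube uses internally may differ; the point is only that the "repoTags"/"repoDigests"/"pinned" fields shown here come from the CRI image listing.)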
	I1026 01:34:50.906921   46163 kubeadm.go:934] updating node { 192.168.39.35 8443 v1.31.2 crio true true} ...
	I1026 01:34:50.907034   46163 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-328488 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.35
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-328488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
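	(The [Unit]/[Service] fragment above is the kubelet systemd drop-in minikube renders from this cluster config. A minimal, hedged way to confirm what actually landed on the node, assuming the same profile name, is:
	    minikube ssh -p multinode-328488 -- systemctl cat kubelet
	which prints the unit together with any drop-ins, including the ExecStart line with --node-ip=192.168.39.35 shown above.)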
	I1026 01:34:50.907118   46163 ssh_runner.go:195] Run: crio config
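	(The lines that follow are the effective, merged CRI-O configuration as rendered by `crio config`. When only a handful of values matter, such as the cgroup driver or the pinned pause image referenced later in this dump, a shorter sketch, assuming shell access to the node, is:
	    sudo crio config | grep -E 'cgroup_manager|pause_image'
	This is just a filtering convenience over the same output; it does not change what CRI-O is running with.)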
	I1026 01:34:50.952629   46163 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1026 01:34:50.952658   46163 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1026 01:34:50.952668   46163 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1026 01:34:50.952673   46163 command_runner.go:130] > #
	I1026 01:34:50.952684   46163 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1026 01:34:50.952692   46163 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1026 01:34:50.952700   46163 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1026 01:34:50.952718   46163 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1026 01:34:50.952725   46163 command_runner.go:130] > # reload'.
	I1026 01:34:50.952735   46163 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1026 01:34:50.952750   46163 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1026 01:34:50.952763   46163 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1026 01:34:50.952777   46163 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1026 01:34:50.952785   46163 command_runner.go:130] > [crio]
	I1026 01:34:50.952795   46163 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1026 01:34:50.952804   46163 command_runner.go:130] > # containers images, in this directory.
	I1026 01:34:50.952815   46163 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1026 01:34:50.952846   46163 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1026 01:34:50.952856   46163 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1026 01:34:50.952868   46163 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1026 01:34:50.952878   46163 command_runner.go:130] > # imagestore = ""
	I1026 01:34:50.952900   46163 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1026 01:34:50.952914   46163 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1026 01:34:50.952925   46163 command_runner.go:130] > storage_driver = "overlay"
	I1026 01:34:50.952936   46163 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1026 01:34:50.952948   46163 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1026 01:34:50.952958   46163 command_runner.go:130] > storage_option = [
	I1026 01:34:50.952969   46163 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1026 01:34:50.952977   46163 command_runner.go:130] > ]
	I1026 01:34:50.952989   46163 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1026 01:34:50.953002   46163 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1026 01:34:50.953012   46163 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1026 01:34:50.953019   46163 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1026 01:34:50.953028   46163 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1026 01:34:50.953036   46163 command_runner.go:130] > # always happen on a node reboot
	I1026 01:34:50.953047   46163 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1026 01:34:50.953059   46163 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1026 01:34:50.953068   46163 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1026 01:34:50.953073   46163 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1026 01:34:50.953078   46163 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1026 01:34:50.953085   46163 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1026 01:34:50.953094   46163 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1026 01:34:50.953098   46163 command_runner.go:130] > # internal_wipe = true
	I1026 01:34:50.953106   46163 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1026 01:34:50.953113   46163 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1026 01:34:50.953118   46163 command_runner.go:130] > # internal_repair = false
	I1026 01:34:50.953125   46163 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1026 01:34:50.953131   46163 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1026 01:34:50.953136   46163 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1026 01:34:50.953142   46163 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1026 01:34:50.953147   46163 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1026 01:34:50.953151   46163 command_runner.go:130] > [crio.api]
	I1026 01:34:50.953156   46163 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1026 01:34:50.953167   46163 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1026 01:34:50.953178   46163 command_runner.go:130] > # IP address on which the stream server will listen.
	I1026 01:34:50.953186   46163 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1026 01:34:50.953198   46163 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1026 01:34:50.953210   46163 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1026 01:34:50.953219   46163 command_runner.go:130] > # stream_port = "0"
	I1026 01:34:50.953228   46163 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1026 01:34:50.953236   46163 command_runner.go:130] > # stream_enable_tls = false
	I1026 01:34:50.953245   46163 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1026 01:34:50.953255   46163 command_runner.go:130] > # stream_idle_timeout = ""
	I1026 01:34:50.953266   46163 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1026 01:34:50.953279   46163 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1026 01:34:50.953287   46163 command_runner.go:130] > # minutes.
	I1026 01:34:50.953293   46163 command_runner.go:130] > # stream_tls_cert = ""
	I1026 01:34:50.953306   46163 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1026 01:34:50.953316   46163 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1026 01:34:50.953326   46163 command_runner.go:130] > # stream_tls_key = ""
	I1026 01:34:50.953336   46163 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1026 01:34:50.953348   46163 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1026 01:34:50.953367   46163 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1026 01:34:50.953377   46163 command_runner.go:130] > # stream_tls_ca = ""
	I1026 01:34:50.953390   46163 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1026 01:34:50.953400   46163 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1026 01:34:50.953411   46163 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1026 01:34:50.953433   46163 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1026 01:34:50.953447   46163 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1026 01:34:50.953459   46163 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1026 01:34:50.953468   46163 command_runner.go:130] > [crio.runtime]
	I1026 01:34:50.953478   46163 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1026 01:34:50.953489   46163 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1026 01:34:50.953496   46163 command_runner.go:130] > # "nofile=1024:2048"
	I1026 01:34:50.953506   46163 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1026 01:34:50.953516   46163 command_runner.go:130] > # default_ulimits = [
	I1026 01:34:50.953522   46163 command_runner.go:130] > # ]
	I1026 01:34:50.953534   46163 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1026 01:34:50.953543   46163 command_runner.go:130] > # no_pivot = false
	I1026 01:34:50.953552   46163 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1026 01:34:50.953561   46163 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1026 01:34:50.953566   46163 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1026 01:34:50.953582   46163 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1026 01:34:50.953593   46163 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1026 01:34:50.953603   46163 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1026 01:34:50.953610   46163 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1026 01:34:50.953621   46163 command_runner.go:130] > # Cgroup setting for conmon
	I1026 01:34:50.953631   46163 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1026 01:34:50.953638   46163 command_runner.go:130] > conmon_cgroup = "pod"
	I1026 01:34:50.953647   46163 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1026 01:34:50.953658   46163 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1026 01:34:50.953671   46163 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1026 01:34:50.953677   46163 command_runner.go:130] > conmon_env = [
	I1026 01:34:50.953685   46163 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1026 01:34:50.953696   46163 command_runner.go:130] > ]
	I1026 01:34:50.953704   46163 command_runner.go:130] > # Additional environment variables to set for all the
	I1026 01:34:50.953715   46163 command_runner.go:130] > # containers. These are overridden if set in the
	I1026 01:34:50.953728   46163 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1026 01:34:50.953737   46163 command_runner.go:130] > # default_env = [
	I1026 01:34:50.953743   46163 command_runner.go:130] > # ]
	I1026 01:34:50.953755   46163 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1026 01:34:50.953770   46163 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1026 01:34:50.953782   46163 command_runner.go:130] > # selinux = false
	I1026 01:34:50.953791   46163 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1026 01:34:50.953804   46163 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1026 01:34:50.953816   46163 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1026 01:34:50.953826   46163 command_runner.go:130] > # seccomp_profile = ""
	I1026 01:34:50.953834   46163 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1026 01:34:50.953845   46163 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1026 01:34:50.953859   46163 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1026 01:34:50.953869   46163 command_runner.go:130] > # which might increase security.
	I1026 01:34:50.953876   46163 command_runner.go:130] > # This option is currently deprecated,
	I1026 01:34:50.953888   46163 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1026 01:34:50.953903   46163 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1026 01:34:50.953913   46163 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1026 01:34:50.953925   46163 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1026 01:34:50.953935   46163 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1026 01:34:50.953948   46163 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1026 01:34:50.953959   46163 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:34:50.953972   46163 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1026 01:34:50.953985   46163 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1026 01:34:50.953994   46163 command_runner.go:130] > # the cgroup blockio controller.
	I1026 01:34:50.954001   46163 command_runner.go:130] > # blockio_config_file = ""
	I1026 01:34:50.954014   46163 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1026 01:34:50.954024   46163 command_runner.go:130] > # blockio parameters.
	I1026 01:34:50.954031   46163 command_runner.go:130] > # blockio_reload = false
	I1026 01:34:50.954045   46163 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1026 01:34:50.954053   46163 command_runner.go:130] > # irqbalance daemon.
	I1026 01:34:50.954062   46163 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1026 01:34:50.954075   46163 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1026 01:34:50.954090   46163 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1026 01:34:50.954103   46163 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1026 01:34:50.954115   46163 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1026 01:34:50.954129   46163 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1026 01:34:50.954139   46163 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:34:50.954148   46163 command_runner.go:130] > # rdt_config_file = ""
	I1026 01:34:50.954160   46163 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1026 01:34:50.954170   46163 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1026 01:34:50.954198   46163 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1026 01:34:50.954209   46163 command_runner.go:130] > # separate_pull_cgroup = ""
	I1026 01:34:50.954218   46163 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1026 01:34:50.954229   46163 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1026 01:34:50.954238   46163 command_runner.go:130] > # will be added.
	I1026 01:34:50.954246   46163 command_runner.go:130] > # default_capabilities = [
	I1026 01:34:50.954254   46163 command_runner.go:130] > # 	"CHOWN",
	I1026 01:34:50.954264   46163 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1026 01:34:50.954271   46163 command_runner.go:130] > # 	"FSETID",
	I1026 01:34:50.954281   46163 command_runner.go:130] > # 	"FOWNER",
	I1026 01:34:50.954288   46163 command_runner.go:130] > # 	"SETGID",
	I1026 01:34:50.954296   46163 command_runner.go:130] > # 	"SETUID",
	I1026 01:34:50.954303   46163 command_runner.go:130] > # 	"SETPCAP",
	I1026 01:34:50.954312   46163 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1026 01:34:50.954319   46163 command_runner.go:130] > # 	"KILL",
	I1026 01:34:50.954328   46163 command_runner.go:130] > # ]
	I1026 01:34:50.954340   46163 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1026 01:34:50.954352   46163 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1026 01:34:50.954361   46163 command_runner.go:130] > # add_inheritable_capabilities = false
	I1026 01:34:50.954372   46163 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1026 01:34:50.954383   46163 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1026 01:34:50.954392   46163 command_runner.go:130] > default_sysctls = [
	I1026 01:34:50.954402   46163 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1026 01:34:50.954408   46163 command_runner.go:130] > ]
	I1026 01:34:50.954414   46163 command_runner.go:130] > # List of devices on the host that a
	I1026 01:34:50.954425   46163 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1026 01:34:50.954435   46163 command_runner.go:130] > # allowed_devices = [
	I1026 01:34:50.954441   46163 command_runner.go:130] > # 	"/dev/fuse",
	I1026 01:34:50.954450   46163 command_runner.go:130] > # ]
	I1026 01:34:50.954458   46163 command_runner.go:130] > # List of additional devices. specified as
	I1026 01:34:50.954471   46163 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1026 01:34:50.954482   46163 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1026 01:34:50.954494   46163 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1026 01:34:50.954503   46163 command_runner.go:130] > # additional_devices = [
	I1026 01:34:50.954507   46163 command_runner.go:130] > # ]
	I1026 01:34:50.954512   46163 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1026 01:34:50.954522   46163 command_runner.go:130] > # cdi_spec_dirs = [
	I1026 01:34:50.954532   46163 command_runner.go:130] > # 	"/etc/cdi",
	I1026 01:34:50.954538   46163 command_runner.go:130] > # 	"/var/run/cdi",
	I1026 01:34:50.954547   46163 command_runner.go:130] > # ]
	I1026 01:34:50.954557   46163 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1026 01:34:50.954569   46163 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1026 01:34:50.954578   46163 command_runner.go:130] > # Defaults to false.
	I1026 01:34:50.954585   46163 command_runner.go:130] > # device_ownership_from_security_context = false
	I1026 01:34:50.954597   46163 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1026 01:34:50.954605   46163 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1026 01:34:50.954609   46163 command_runner.go:130] > # hooks_dir = [
	I1026 01:34:50.954616   46163 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1026 01:34:50.954625   46163 command_runner.go:130] > # ]
	I1026 01:34:50.954634   46163 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1026 01:34:50.954644   46163 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1026 01:34:50.954656   46163 command_runner.go:130] > # its default mounts from the following two files:
	I1026 01:34:50.954664   46163 command_runner.go:130] > #
	I1026 01:34:50.954673   46163 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1026 01:34:50.954686   46163 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1026 01:34:50.954703   46163 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1026 01:34:50.954711   46163 command_runner.go:130] > #
	I1026 01:34:50.954721   46163 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1026 01:34:50.954735   46163 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1026 01:34:50.954749   46163 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1026 01:34:50.954778   46163 command_runner.go:130] > #      only add mounts it finds in this file.
	I1026 01:34:50.954789   46163 command_runner.go:130] > #
	I1026 01:34:50.954796   46163 command_runner.go:130] > # default_mounts_file = ""
	I1026 01:34:50.954808   46163 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1026 01:34:50.954821   46163 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1026 01:34:50.954831   46163 command_runner.go:130] > pids_limit = 1024
	I1026 01:34:50.954840   46163 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1026 01:34:50.954852   46163 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1026 01:34:50.954863   46163 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1026 01:34:50.954875   46163 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1026 01:34:50.954885   46163 command_runner.go:130] > # log_size_max = -1
	I1026 01:34:50.954900   46163 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1026 01:34:50.954908   46163 command_runner.go:130] > # log_to_journald = false
	I1026 01:34:50.954917   46163 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1026 01:34:50.954925   46163 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1026 01:34:50.954935   46163 command_runner.go:130] > # Path to directory for container attach sockets.
	I1026 01:34:50.954947   46163 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1026 01:34:50.954955   46163 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1026 01:34:50.954964   46163 command_runner.go:130] > # bind_mount_prefix = ""
	I1026 01:34:50.954973   46163 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1026 01:34:50.954983   46163 command_runner.go:130] > # read_only = false
	I1026 01:34:50.954993   46163 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1026 01:34:50.955005   46163 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1026 01:34:50.955015   46163 command_runner.go:130] > # live configuration reload.
	I1026 01:34:50.955021   46163 command_runner.go:130] > # log_level = "info"
	I1026 01:34:50.955029   46163 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1026 01:34:50.955036   46163 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:34:50.955056   46163 command_runner.go:130] > # log_filter = ""
	I1026 01:34:50.955064   46163 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1026 01:34:50.955070   46163 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1026 01:34:50.955076   46163 command_runner.go:130] > # separated by comma.
	I1026 01:34:50.955088   46163 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1026 01:34:50.955097   46163 command_runner.go:130] > # uid_mappings = ""
	I1026 01:34:50.955106   46163 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1026 01:34:50.955119   46163 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1026 01:34:50.955127   46163 command_runner.go:130] > # separated by comma.
	I1026 01:34:50.955139   46163 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1026 01:34:50.955149   46163 command_runner.go:130] > # gid_mappings = ""
	I1026 01:34:50.955158   46163 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1026 01:34:50.955169   46163 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1026 01:34:50.955185   46163 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1026 01:34:50.955200   46163 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1026 01:34:50.955210   46163 command_runner.go:130] > # minimum_mappable_uid = -1
	I1026 01:34:50.955220   46163 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1026 01:34:50.955233   46163 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1026 01:34:50.955245   46163 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1026 01:34:50.955257   46163 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1026 01:34:50.955267   46163 command_runner.go:130] > # minimum_mappable_gid = -1
	I1026 01:34:50.955280   46163 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1026 01:34:50.955291   46163 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1026 01:34:50.955303   46163 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1026 01:34:50.955313   46163 command_runner.go:130] > # ctr_stop_timeout = 30
	I1026 01:34:50.955323   46163 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1026 01:34:50.955334   46163 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1026 01:34:50.955345   46163 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1026 01:34:50.955365   46163 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1026 01:34:50.955373   46163 command_runner.go:130] > drop_infra_ctr = false
	I1026 01:34:50.955385   46163 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1026 01:34:50.955395   46163 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1026 01:34:50.955410   46163 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1026 01:34:50.955419   46163 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1026 01:34:50.955430   46163 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1026 01:34:50.955442   46163 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1026 01:34:50.955452   46163 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1026 01:34:50.955465   46163 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1026 01:34:50.955475   46163 command_runner.go:130] > # shared_cpuset = ""
	I1026 01:34:50.955483   46163 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1026 01:34:50.955490   46163 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1026 01:34:50.955495   46163 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1026 01:34:50.955505   46163 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1026 01:34:50.955515   46163 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1026 01:34:50.955523   46163 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1026 01:34:50.955537   46163 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1026 01:34:50.955546   46163 command_runner.go:130] > # enable_criu_support = false
	I1026 01:34:50.955554   46163 command_runner.go:130] > # Enable/disable the generation of the container,
	I1026 01:34:50.955575   46163 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1026 01:34:50.955585   46163 command_runner.go:130] > # enable_pod_events = false
	I1026 01:34:50.955595   46163 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1026 01:34:50.955607   46163 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1026 01:34:50.955615   46163 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1026 01:34:50.955624   46163 command_runner.go:130] > # default_runtime = "runc"
	I1026 01:34:50.955633   46163 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1026 01:34:50.955647   46163 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1026 01:34:50.955659   46163 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1026 01:34:50.955666   46163 command_runner.go:130] > # creation as a file is not desired either.
	I1026 01:34:50.955673   46163 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1026 01:34:50.955680   46163 command_runner.go:130] > # the hostname is being managed dynamically.
	I1026 01:34:50.955685   46163 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1026 01:34:50.955689   46163 command_runner.go:130] > # ]
	I1026 01:34:50.955707   46163 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1026 01:34:50.955718   46163 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1026 01:34:50.955724   46163 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1026 01:34:50.955731   46163 command_runner.go:130] > # Each entry in the table should follow the format:
	I1026 01:34:50.955734   46163 command_runner.go:130] > #
	I1026 01:34:50.955742   46163 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1026 01:34:50.955746   46163 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1026 01:34:50.955808   46163 command_runner.go:130] > # runtime_type = "oci"
	I1026 01:34:50.955821   46163 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1026 01:34:50.955825   46163 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1026 01:34:50.955830   46163 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1026 01:34:50.955834   46163 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1026 01:34:50.955840   46163 command_runner.go:130] > # monitor_env = []
	I1026 01:34:50.955847   46163 command_runner.go:130] > # privileged_without_host_devices = false
	I1026 01:34:50.955854   46163 command_runner.go:130] > # allowed_annotations = []
	I1026 01:34:50.955861   46163 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1026 01:34:50.955869   46163 command_runner.go:130] > # Where:
	I1026 01:34:50.955878   46163 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1026 01:34:50.955887   46163 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1026 01:34:50.955902   46163 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1026 01:34:50.955914   46163 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1026 01:34:50.955921   46163 command_runner.go:130] > #   in $PATH.
	I1026 01:34:50.955933   46163 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1026 01:34:50.955941   46163 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1026 01:34:50.955957   46163 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1026 01:34:50.955964   46163 command_runner.go:130] > #   state.
	I1026 01:34:50.955970   46163 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1026 01:34:50.955978   46163 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1026 01:34:50.955984   46163 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1026 01:34:50.955991   46163 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1026 01:34:50.956000   46163 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1026 01:34:50.956013   46163 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1026 01:34:50.956023   46163 command_runner.go:130] > #   The currently recognized values are:
	I1026 01:34:50.956033   46163 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1026 01:34:50.956047   46163 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1026 01:34:50.956059   46163 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1026 01:34:50.956072   46163 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1026 01:34:50.956086   46163 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1026 01:34:50.956095   46163 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1026 01:34:50.956101   46163 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1026 01:34:50.956109   46163 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1026 01:34:50.956117   46163 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1026 01:34:50.956125   46163 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1026 01:34:50.956129   46163 command_runner.go:130] > #   deprecated option "conmon".
	I1026 01:34:50.956137   46163 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1026 01:34:50.956149   46163 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1026 01:34:50.956161   46163 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1026 01:34:50.956172   46163 command_runner.go:130] > #   should be moved to the container's cgroup
	I1026 01:34:50.956186   46163 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1026 01:34:50.956196   46163 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1026 01:34:50.956209   46163 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1026 01:34:50.956219   46163 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1026 01:34:50.956225   46163 command_runner.go:130] > #
	I1026 01:34:50.956233   46163 command_runner.go:130] > # Using the seccomp notifier feature:
	I1026 01:34:50.956242   46163 command_runner.go:130] > #
	I1026 01:34:50.956252   46163 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1026 01:34:50.956265   46163 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1026 01:34:50.956276   46163 command_runner.go:130] > #
	I1026 01:34:50.956288   46163 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1026 01:34:50.956299   46163 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1026 01:34:50.956307   46163 command_runner.go:130] > #
	I1026 01:34:50.956320   46163 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1026 01:34:50.956330   46163 command_runner.go:130] > # feature.
	I1026 01:34:50.956335   46163 command_runner.go:130] > #
	I1026 01:34:50.956345   46163 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1026 01:34:50.956357   46163 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1026 01:34:50.956370   46163 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1026 01:34:50.956382   46163 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1026 01:34:50.956391   46163 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1026 01:34:50.956395   46163 command_runner.go:130] > #
	I1026 01:34:50.956408   46163 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1026 01:34:50.956421   46163 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1026 01:34:50.956427   46163 command_runner.go:130] > #
	I1026 01:34:50.956439   46163 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1026 01:34:50.956453   46163 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1026 01:34:50.956461   46163 command_runner.go:130] > #
	I1026 01:34:50.956471   46163 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1026 01:34:50.956485   46163 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1026 01:34:50.956495   46163 command_runner.go:130] > # limitation.
	I1026 01:34:50.956502   46163 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1026 01:34:50.956513   46163 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1026 01:34:50.956520   46163 command_runner.go:130] > runtime_type = "oci"
	I1026 01:34:50.956530   46163 command_runner.go:130] > runtime_root = "/run/runc"
	I1026 01:34:50.956537   46163 command_runner.go:130] > runtime_config_path = ""
	I1026 01:34:50.956548   46163 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1026 01:34:50.956555   46163 command_runner.go:130] > monitor_cgroup = "pod"
	I1026 01:34:50.956565   46163 command_runner.go:130] > monitor_exec_cgroup = ""
	I1026 01:34:50.956571   46163 command_runner.go:130] > monitor_env = [
	I1026 01:34:50.956582   46163 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1026 01:34:50.956585   46163 command_runner.go:130] > ]
	I1026 01:34:50.956591   46163 command_runner.go:130] > privileged_without_host_devices = false
	I1026 01:34:50.956604   46163 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1026 01:34:50.956616   46163 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1026 01:34:50.956626   46163 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1026 01:34:50.956641   46163 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1026 01:34:50.956656   46163 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1026 01:34:50.956667   46163 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1026 01:34:50.956686   46163 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1026 01:34:50.956698   46163 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1026 01:34:50.956708   46163 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1026 01:34:50.956723   46163 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1026 01:34:50.956733   46163 command_runner.go:130] > # Example:
	I1026 01:34:50.956742   46163 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1026 01:34:50.956752   46163 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1026 01:34:50.956763   46163 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1026 01:34:50.956774   46163 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1026 01:34:50.956782   46163 command_runner.go:130] > # cpuset = 0
	I1026 01:34:50.956792   46163 command_runner.go:130] > # cpushares = "0-1"
	I1026 01:34:50.956799   46163 command_runner.go:130] > # Where:
	I1026 01:34:50.956807   46163 command_runner.go:130] > # The workload name is workload-type.
	I1026 01:34:50.956822   46163 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1026 01:34:50.956834   46163 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1026 01:34:50.956846   46163 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1026 01:34:50.956861   46163 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1026 01:34:50.956873   46163 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1026 01:34:50.956884   46163 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1026 01:34:50.956898   46163 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1026 01:34:50.956910   46163 command_runner.go:130] > # Default value is set to true
	I1026 01:34:50.956917   46163 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1026 01:34:50.956926   46163 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1026 01:34:50.956934   46163 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1026 01:34:50.956941   46163 command_runner.go:130] > # Default value is set to 'false'
	I1026 01:34:50.956951   46163 command_runner.go:130] > # disable_hostport_mapping = false
	I1026 01:34:50.956961   46163 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1026 01:34:50.956966   46163 command_runner.go:130] > #
	I1026 01:34:50.956972   46163 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1026 01:34:50.956980   46163 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1026 01:34:50.956990   46163 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1026 01:34:50.957001   46163 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1026 01:34:50.957009   46163 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1026 01:34:50.957014   46163 command_runner.go:130] > [crio.image]
	I1026 01:34:50.957022   46163 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1026 01:34:50.957029   46163 command_runner.go:130] > # default_transport = "docker://"
	I1026 01:34:50.957041   46163 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1026 01:34:50.957051   46163 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1026 01:34:50.957058   46163 command_runner.go:130] > # global_auth_file = ""
	I1026 01:34:50.957066   46163 command_runner.go:130] > # The image used to instantiate infra containers.
	I1026 01:34:50.957073   46163 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:34:50.957080   46163 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1026 01:34:50.957090   46163 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1026 01:34:50.957107   46163 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1026 01:34:50.957116   46163 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:34:50.957123   46163 command_runner.go:130] > # pause_image_auth_file = ""
	I1026 01:34:50.957132   46163 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1026 01:34:50.957142   46163 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1026 01:34:50.957152   46163 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1026 01:34:50.957160   46163 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1026 01:34:50.957167   46163 command_runner.go:130] > # pause_command = "/pause"
	I1026 01:34:50.957178   46163 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1026 01:34:50.957187   46163 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1026 01:34:50.957201   46163 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1026 01:34:50.957210   46163 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1026 01:34:50.957219   46163 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1026 01:34:50.957228   46163 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1026 01:34:50.957235   46163 command_runner.go:130] > # pinned_images = [
	I1026 01:34:50.957240   46163 command_runner.go:130] > # ]
	I1026 01:34:50.957251   46163 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1026 01:34:50.957265   46163 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1026 01:34:50.957279   46163 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1026 01:34:50.957291   46163 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1026 01:34:50.957303   46163 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1026 01:34:50.957312   46163 command_runner.go:130] > # signature_policy = ""
	I1026 01:34:50.957321   46163 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1026 01:34:50.957333   46163 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1026 01:34:50.957339   46163 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1026 01:34:50.957348   46163 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1026 01:34:50.957354   46163 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1026 01:34:50.957360   46163 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1026 01:34:50.957369   46163 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1026 01:34:50.957377   46163 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1026 01:34:50.957381   46163 command_runner.go:130] > # changing them here.
	I1026 01:34:50.957387   46163 command_runner.go:130] > # insecure_registries = [
	I1026 01:34:50.957390   46163 command_runner.go:130] > # ]
	I1026 01:34:50.957403   46163 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1026 01:34:50.957412   46163 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1026 01:34:50.957429   46163 command_runner.go:130] > # image_volumes = "mkdir"
	I1026 01:34:50.957437   46163 command_runner.go:130] > # Temporary directory to use for storing big files
	I1026 01:34:50.957448   46163 command_runner.go:130] > # big_files_temporary_dir = ""
	I1026 01:34:50.957458   46163 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1026 01:34:50.957467   46163 command_runner.go:130] > # CNI plugins.
	I1026 01:34:50.957474   46163 command_runner.go:130] > [crio.network]
	I1026 01:34:50.957485   46163 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1026 01:34:50.957497   46163 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1026 01:34:50.957505   46163 command_runner.go:130] > # cni_default_network = ""
	I1026 01:34:50.957513   46163 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1026 01:34:50.957518   46163 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1026 01:34:50.957525   46163 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1026 01:34:50.957532   46163 command_runner.go:130] > # plugin_dirs = [
	I1026 01:34:50.957536   46163 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1026 01:34:50.957541   46163 command_runner.go:130] > # ]
	I1026 01:34:50.957547   46163 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1026 01:34:50.957553   46163 command_runner.go:130] > [crio.metrics]
	I1026 01:34:50.957557   46163 command_runner.go:130] > # Globally enable or disable metrics support.
	I1026 01:34:50.957564   46163 command_runner.go:130] > enable_metrics = true
	I1026 01:34:50.957568   46163 command_runner.go:130] > # Specify enabled metrics collectors.
	I1026 01:34:50.957575   46163 command_runner.go:130] > # Per default all metrics are enabled.
	I1026 01:34:50.957581   46163 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1026 01:34:50.957589   46163 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1026 01:34:50.957595   46163 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1026 01:34:50.957601   46163 command_runner.go:130] > # metrics_collectors = [
	I1026 01:34:50.957605   46163 command_runner.go:130] > # 	"operations",
	I1026 01:34:50.957612   46163 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1026 01:34:50.957617   46163 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1026 01:34:50.957621   46163 command_runner.go:130] > # 	"operations_errors",
	I1026 01:34:50.957627   46163 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1026 01:34:50.957631   46163 command_runner.go:130] > # 	"image_pulls_by_name",
	I1026 01:34:50.957646   46163 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1026 01:34:50.957653   46163 command_runner.go:130] > # 	"image_pulls_failures",
	I1026 01:34:50.957657   46163 command_runner.go:130] > # 	"image_pulls_successes",
	I1026 01:34:50.957663   46163 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1026 01:34:50.957667   46163 command_runner.go:130] > # 	"image_layer_reuse",
	I1026 01:34:50.957674   46163 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1026 01:34:50.957680   46163 command_runner.go:130] > # 	"containers_oom_total",
	I1026 01:34:50.957687   46163 command_runner.go:130] > # 	"containers_oom",
	I1026 01:34:50.957691   46163 command_runner.go:130] > # 	"processes_defunct",
	I1026 01:34:50.957697   46163 command_runner.go:130] > # 	"operations_total",
	I1026 01:34:50.957701   46163 command_runner.go:130] > # 	"operations_latency_seconds",
	I1026 01:34:50.957708   46163 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1026 01:34:50.957712   46163 command_runner.go:130] > # 	"operations_errors_total",
	I1026 01:34:50.957716   46163 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1026 01:34:50.957723   46163 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1026 01:34:50.957727   46163 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1026 01:34:50.957733   46163 command_runner.go:130] > # 	"image_pulls_success_total",
	I1026 01:34:50.957738   46163 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1026 01:34:50.957744   46163 command_runner.go:130] > # 	"containers_oom_count_total",
	I1026 01:34:50.957748   46163 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1026 01:34:50.957754   46163 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1026 01:34:50.957758   46163 command_runner.go:130] > # ]
	I1026 01:34:50.957765   46163 command_runner.go:130] > # The port on which the metrics server will listen.
	I1026 01:34:50.957769   46163 command_runner.go:130] > # metrics_port = 9090
	I1026 01:34:50.957776   46163 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1026 01:34:50.957780   46163 command_runner.go:130] > # metrics_socket = ""
	I1026 01:34:50.957787   46163 command_runner.go:130] > # The certificate for the secure metrics server.
	I1026 01:34:50.957793   46163 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1026 01:34:50.957801   46163 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1026 01:34:50.957805   46163 command_runner.go:130] > # certificate on any modification event.
	I1026 01:34:50.957812   46163 command_runner.go:130] > # metrics_cert = ""
	I1026 01:34:50.957821   46163 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1026 01:34:50.957832   46163 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1026 01:34:50.957845   46163 command_runner.go:130] > # metrics_key = ""
	I1026 01:34:50.957853   46163 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1026 01:34:50.957860   46163 command_runner.go:130] > [crio.tracing]
	I1026 01:34:50.957865   46163 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1026 01:34:50.957871   46163 command_runner.go:130] > # enable_tracing = false
	I1026 01:34:50.957876   46163 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1026 01:34:50.957884   46163 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1026 01:34:50.957894   46163 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1026 01:34:50.957901   46163 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1026 01:34:50.957905   46163 command_runner.go:130] > # CRI-O NRI configuration.
	I1026 01:34:50.957911   46163 command_runner.go:130] > [crio.nri]
	I1026 01:34:50.957917   46163 command_runner.go:130] > # Globally enable or disable NRI.
	I1026 01:34:50.957923   46163 command_runner.go:130] > # enable_nri = false
	I1026 01:34:50.957927   46163 command_runner.go:130] > # NRI socket to listen on.
	I1026 01:34:50.957932   46163 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1026 01:34:50.957938   46163 command_runner.go:130] > # NRI plugin directory to use.
	I1026 01:34:50.957943   46163 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1026 01:34:50.957952   46163 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1026 01:34:50.957959   46163 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1026 01:34:50.957964   46163 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1026 01:34:50.957970   46163 command_runner.go:130] > # nri_disable_connections = false
	I1026 01:34:50.957976   46163 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1026 01:34:50.957982   46163 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1026 01:34:50.957987   46163 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1026 01:34:50.957994   46163 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1026 01:34:50.958000   46163 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1026 01:34:50.958006   46163 command_runner.go:130] > [crio.stats]
	I1026 01:34:50.958012   46163 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1026 01:34:50.958018   46163 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1026 01:34:50.958022   46163 command_runner.go:130] > # stats_collection_period = 0
	I1026 01:34:50.958668   46163 command_runner.go:130] ! time="2024-10-26 01:34:50.914024361Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1026 01:34:50.958695   46163 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1026 01:34:50.958769   46163 cni.go:84] Creating CNI manager for ""
	I1026 01:34:50.958783   46163 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1026 01:34:50.958795   46163 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 01:34:50.958820   46163 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.35 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-328488 NodeName:multinode-328488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.35"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.35 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 01:34:50.958987   46163 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.35
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-328488"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.35"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.35"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 01:34:50.959061   46163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:34:50.968732   46163 command_runner.go:130] > kubeadm
	I1026 01:34:50.968754   46163 command_runner.go:130] > kubectl
	I1026 01:34:50.968758   46163 command_runner.go:130] > kubelet
	I1026 01:34:50.968796   46163 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 01:34:50.968842   46163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 01:34:50.977796   46163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1026 01:34:50.994030   46163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:34:51.009814   46163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1026 01:34:51.025838   46163 ssh_runner.go:195] Run: grep 192.168.39.35	control-plane.minikube.internal$ /etc/hosts
	I1026 01:34:51.029484   46163 command_runner.go:130] > 192.168.39.35	control-plane.minikube.internal
	I1026 01:34:51.029670   46163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:34:51.162883   46163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:34:51.177166   46163 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488 for IP: 192.168.39.35
	I1026 01:34:51.177186   46163 certs.go:194] generating shared ca certs ...
	I1026 01:34:51.177201   46163 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:34:51.177351   46163 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:34:51.177391   46163 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:34:51.177401   46163 certs.go:256] generating profile certs ...
	I1026 01:34:51.177510   46163 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/client.key
	I1026 01:34:51.177568   46163 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/apiserver.key.6d521543
	I1026 01:34:51.177605   46163 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/proxy-client.key
	I1026 01:34:51.177618   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:34:51.177634   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:34:51.177648   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:34:51.177661   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:34:51.177673   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:34:51.177685   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:34:51.177697   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:34:51.177709   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:34:51.177762   46163 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:34:51.177795   46163 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:34:51.177809   46163 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:34:51.177835   46163 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:34:51.177857   46163 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:34:51.177889   46163 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:34:51.177926   46163 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:34:51.177952   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /usr/share/ca-certificates/176152.pem
	I1026 01:34:51.177965   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:34:51.177981   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem -> /usr/share/ca-certificates/17615.pem
	I1026 01:34:51.178545   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:34:51.203015   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:34:51.226538   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:34:51.250438   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:34:51.274389   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 01:34:51.299439   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 01:34:51.346100   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:34:51.370073   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:34:51.394061   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:34:51.417913   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:34:51.441314   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:34:51.464922   46163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 01:34:51.480778   46163 ssh_runner.go:195] Run: openssl version
	I1026 01:34:51.486390   46163 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1026 01:34:51.486469   46163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:34:51.497463   46163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:34:51.501568   46163 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:34:51.501885   46163 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:34:51.501935   46163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:34:51.507271   46163 command_runner.go:130] > b5213941
	I1026 01:34:51.507341   46163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:34:51.516664   46163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:34:51.527486   46163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:34:51.531729   46163 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:34:51.532095   46163 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:34:51.532152   46163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:34:51.537534   46163 command_runner.go:130] > 51391683
	I1026 01:34:51.537676   46163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 01:34:51.546317   46163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:34:51.556109   46163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:34:51.560060   46163 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:34:51.560219   46163 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:34:51.560272   46163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:34:51.565302   46163 command_runner.go:130] > 3ec20f2e
	I1026 01:34:51.565482   46163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:34:51.573828   46163 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:34:51.577649   46163 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:34:51.577663   46163 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1026 01:34:51.577668   46163 command_runner.go:130] > Device: 253,1	Inode: 6291502     Links: 1
	I1026 01:34:51.577674   46163 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1026 01:34:51.577680   46163 command_runner.go:130] > Access: 2024-10-26 01:28:12.975968770 +0000
	I1026 01:34:51.577685   46163 command_runner.go:130] > Modify: 2024-10-26 01:28:12.975968770 +0000
	I1026 01:34:51.577696   46163 command_runner.go:130] > Change: 2024-10-26 01:28:12.975968770 +0000
	I1026 01:34:51.577700   46163 command_runner.go:130] >  Birth: 2024-10-26 01:28:12.975968770 +0000
	I1026 01:34:51.577800   46163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 01:34:51.582975   46163 command_runner.go:130] > Certificate will not expire
	I1026 01:34:51.583034   46163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 01:34:51.588009   46163 command_runner.go:130] > Certificate will not expire
	I1026 01:34:51.588055   46163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 01:34:51.593006   46163 command_runner.go:130] > Certificate will not expire
	I1026 01:34:51.593188   46163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 01:34:51.598162   46163 command_runner.go:130] > Certificate will not expire
	I1026 01:34:51.598355   46163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 01:34:51.603317   46163 command_runner.go:130] > Certificate will not expire
	I1026 01:34:51.603465   46163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 01:34:51.608609   46163 command_runner.go:130] > Certificate will not expire
	I1026 01:34:51.608854   46163 kubeadm.go:392] StartCluster: {Name:multinode-328488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-328488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:34:51.608964   46163 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 01:34:51.609011   46163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 01:34:51.646979   46163 command_runner.go:130] > bc4e75868442bf940a705867036e2ae4099bb4db1651bf216fc502215b9c239d
	I1026 01:34:51.647006   46163 command_runner.go:130] > 95e559f893c174aa8b66984700b8cdaaeda1b662d69a8ff15021f775acd0671d
	I1026 01:34:51.647013   46163 command_runner.go:130] > e2187ce2d5e839cc3f2fa0ef2721c1dbfd167077c759594e9360e922c1d1100b
	I1026 01:34:51.647021   46163 command_runner.go:130] > de93b49883e4242d219cb67055a43628d428e32ab41baaf696c90993c288beea
	I1026 01:34:51.647093   46163 command_runner.go:130] > 3711c0271da051688fa1322358cd58eab86e9565d5a5961679a354d1d7de91bb
	I1026 01:34:51.647176   46163 command_runner.go:130] > 85f818a23be263dec89ee672e9a595a013940a7113d2587d88e63822d37824b9
	I1026 01:34:51.647244   46163 command_runner.go:130] > ea1ec21d25070478483636ee683170416b5266b38d0dcf7ba88c253fa585e905
	I1026 01:34:51.647342   46163 command_runner.go:130] > 810643c0c723504a6ccb55d66d2d93c6cb55373974a5ce23ee716c5689169b6d
	I1026 01:34:51.649116   46163 cri.go:89] found id: "bc4e75868442bf940a705867036e2ae4099bb4db1651bf216fc502215b9c239d"
	I1026 01:34:51.649131   46163 cri.go:89] found id: "95e559f893c174aa8b66984700b8cdaaeda1b662d69a8ff15021f775acd0671d"
	I1026 01:34:51.649135   46163 cri.go:89] found id: "e2187ce2d5e839cc3f2fa0ef2721c1dbfd167077c759594e9360e922c1d1100b"
	I1026 01:34:51.649139   46163 cri.go:89] found id: "de93b49883e4242d219cb67055a43628d428e32ab41baaf696c90993c288beea"
	I1026 01:34:51.649142   46163 cri.go:89] found id: "3711c0271da051688fa1322358cd58eab86e9565d5a5961679a354d1d7de91bb"
	I1026 01:34:51.649145   46163 cri.go:89] found id: "85f818a23be263dec89ee672e9a595a013940a7113d2587d88e63822d37824b9"
	I1026 01:34:51.649147   46163 cri.go:89] found id: "ea1ec21d25070478483636ee683170416b5266b38d0dcf7ba88c253fa585e905"
	I1026 01:34:51.649150   46163 cri.go:89] found id: "810643c0c723504a6ccb55d66d2d93c6cb55373974a5ce23ee716c5689169b6d"
	I1026 01:34:51.649152   46163 cri.go:89] found id: ""
	I1026 01:34:51.649193   46163 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
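The truncated post-mortem log above ends while the restart path enumerates kube-system containers (crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system, followed by sudo runc list -f json). For reference only, a minimal Go sketch of that same enumeration; the command and label are copied from the log, while the wrapper itself is illustrative and is not minikube's cri package:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same invocation the log shows (run there via `sudo -s eval`):
		// list every kube-system container ID known to CRI-O, running or not.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		// One container ID per line, mirroring the "found id:" entries above.
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}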
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-328488 -n multinode-328488
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-328488 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (325.32s)
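Earlier in the same restart log, each reused control-plane certificate is probed with openssl x509 -noout -checkend 86400 ("Certificate will not expire") before the cluster is brought back up. As a reference, a minimal Go sketch of an equivalent freshness check using crypto/x509; the certificate path and the 24-hour window mirror the log, but the program is an illustration, not minikube's implementation:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Path taken from the log above; any PEM-encoded certificate works.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of `openssl x509 -checkend 86400`: report whether the
		// certificate is still valid 24 hours from now.
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("Certificate will expire")
			os.Exit(1)
		}
		fmt.Println("Certificate will not expire")
	}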

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (145.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 stop
E1026 01:36:56.029260   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-328488 stop: exit status 82 (2m0.46774818s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-328488-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-328488 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 status
E1026 01:38:52.966510   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-328488 status: (18.875140523s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-328488 status --alsologtostderr: (3.359981344s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-328488 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-328488 status --alsologtostderr": 
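Both assertions above fire because minikube stop exited with status 82 (GUEST_STOP_TIMEOUT) while at least one VM was still Running, so the follow-up status calls did not find the expected number of stopped hosts and kubelets. For triage, a hedged sketch of a poll loop around out/minikube-linux-amd64 status --format={{.Host}} -p multinode-328488 (the same subcommand the post-mortem helpers run below, here without the per-node -n flag); the 3-minute deadline and 10-second interval are assumptions, and this is not the test suite's own logic:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		// Keep asking for the host state until nothing reports Running or a
		// deadline passes. minikube status exits non-zero for stopped hosts,
		// so the error from the command is deliberately ignored here.
		deadline := time.Now().Add(3 * time.Minute)
		for time.Now().Before(deadline) {
			out, _ := exec.Command("out/minikube-linux-amd64", "status",
				"--format={{.Host}}", "-p", "multinode-328488").CombinedOutput()
			state := strings.TrimSpace(string(out))
			fmt.Println("host state:", state)
			if state != "" && !strings.Contains(state, "Running") {
				return
			}
			time.Sleep(10 * time.Second)
		}
		fmt.Fprintln(os.Stderr, "timed out waiting for all nodes to stop")
		os.Exit(1)
	}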
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-328488 -n multinode-328488
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-328488 logs -n 25: (2.026555287s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-328488 ssh -n                                                                 | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328488 cp multinode-328488-m02:/home/docker/cp-test.txt                       | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488:/home/docker/cp-test_multinode-328488-m02_multinode-328488.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n                                                                 | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n multinode-328488 sudo cat                                       | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | /home/docker/cp-test_multinode-328488-m02_multinode-328488.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-328488 cp multinode-328488-m02:/home/docker/cp-test.txt                       | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m03:/home/docker/cp-test_multinode-328488-m02_multinode-328488-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n                                                                 | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n multinode-328488-m03 sudo cat                                   | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | /home/docker/cp-test_multinode-328488-m02_multinode-328488-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-328488 cp testdata/cp-test.txt                                                | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n                                                                 | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328488 cp multinode-328488-m03:/home/docker/cp-test.txt                       | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2176224653/001/cp-test_multinode-328488-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n                                                                 | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-328488 cp multinode-328488-m03:/home/docker/cp-test.txt                       | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488:/home/docker/cp-test_multinode-328488-m03_multinode-328488.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n                                                                 | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n multinode-328488 sudo cat                                       | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | /home/docker/cp-test_multinode-328488-m03_multinode-328488.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-328488 cp multinode-328488-m03:/home/docker/cp-test.txt                       | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m02:/home/docker/cp-test_multinode-328488-m03_multinode-328488-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n                                                                 | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | multinode-328488-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-328488 ssh -n multinode-328488-m02 sudo cat                                   | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	|         | /home/docker/cp-test_multinode-328488-m03_multinode-328488-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-328488 node stop m03                                                          | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:30 UTC |
	| node    | multinode-328488 node start                                                             | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:30 UTC | 26 Oct 24 01:31 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-328488                                                                | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:31 UTC |                     |
	| stop    | -p multinode-328488                                                                     | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:31 UTC |                     |
	| start   | -p multinode-328488                                                                     | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:33 UTC | 26 Oct 24 01:36 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-328488                                                                | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:36 UTC |                     |
	| node    | multinode-328488 node delete                                                            | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:36 UTC | 26 Oct 24 01:36 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-328488 stop                                                                   | multinode-328488 | jenkins | v1.34.0 | 26 Oct 24 01:36 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 01:33:17
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 01:33:17.413842   46163 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:33:17.413939   46163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:33:17.413944   46163 out.go:358] Setting ErrFile to fd 2...
	I1026 01:33:17.413948   46163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:33:17.414151   46163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 01:33:17.414665   46163 out.go:352] Setting JSON to false
	I1026 01:33:17.415498   46163 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4537,"bootTime":1729901860,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 01:33:17.415592   46163 start.go:139] virtualization: kvm guest
	I1026 01:33:17.417552   46163 out.go:177] * [multinode-328488] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 01:33:17.418729   46163 notify.go:220] Checking for updates...
	I1026 01:33:17.418738   46163 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 01:33:17.419984   46163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:33:17.421182   46163 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:33:17.422357   46163 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:33:17.423369   46163 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 01:33:17.424570   46163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:33:17.425993   46163 config.go:182] Loaded profile config "multinode-328488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:33:17.426073   46163 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 01:33:17.426507   46163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:33:17.426545   46163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:33:17.441351   46163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39535
	I1026 01:33:17.441872   46163 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:33:17.442444   46163 main.go:141] libmachine: Using API Version  1
	I1026 01:33:17.442466   46163 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:33:17.442808   46163 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:33:17.442996   46163 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:33:17.477474   46163 out.go:177] * Using the kvm2 driver based on existing profile
	I1026 01:33:17.478652   46163 start.go:297] selected driver: kvm2
	I1026 01:33:17.478666   46163 start.go:901] validating driver "kvm2" against &{Name:multinode-328488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-328488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:f
alse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:33:17.478811   46163 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:33:17.479113   46163 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:33:17.479196   46163 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 01:33:17.494062   46163 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 01:33:17.494867   46163 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 01:33:17.494924   46163 cni.go:84] Creating CNI manager for ""
	I1026 01:33:17.494991   46163 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1026 01:33:17.495070   46163 start.go:340] cluster config:
	{Name:multinode-328488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-328488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:33:17.495234   46163 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:33:17.497245   46163 out.go:177] * Starting "multinode-328488" primary control-plane node in "multinode-328488" cluster
	I1026 01:33:17.498515   46163 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:33:17.498562   46163 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 01:33:17.498572   46163 cache.go:56] Caching tarball of preloaded images
	I1026 01:33:17.498675   46163 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:33:17.498689   46163 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 01:33:17.498797   46163 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/config.json ...
	I1026 01:33:17.499002   46163 start.go:360] acquireMachinesLock for multinode-328488: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 01:33:17.499045   46163 start.go:364] duration metric: took 23.997µs to acquireMachinesLock for "multinode-328488"
	I1026 01:33:17.499064   46163 start.go:96] Skipping create...Using existing machine configuration
	I1026 01:33:17.499073   46163 fix.go:54] fixHost starting: 
	I1026 01:33:17.499325   46163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:33:17.499361   46163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:33:17.513842   46163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45155
	I1026 01:33:17.514300   46163 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:33:17.514862   46163 main.go:141] libmachine: Using API Version  1
	I1026 01:33:17.514884   46163 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:33:17.515224   46163 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:33:17.515418   46163 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:33:17.515567   46163 main.go:141] libmachine: (multinode-328488) Calling .GetState
	I1026 01:33:17.517117   46163 fix.go:112] recreateIfNeeded on multinode-328488: state=Running err=<nil>
	W1026 01:33:17.517140   46163 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 01:33:17.519133   46163 out.go:177] * Updating the running kvm2 "multinode-328488" VM ...
	I1026 01:33:17.520376   46163 machine.go:93] provisionDockerMachine start ...
	I1026 01:33:17.520395   46163 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:33:17.520611   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:33:17.522975   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.523399   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:33:17.523431   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.523545   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:33:17.523744   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:17.523890   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:17.524026   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:33:17.524143   46163 main.go:141] libmachine: Using SSH client type: native
	I1026 01:33:17.524325   46163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1026 01:33:17.524336   46163 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 01:33:17.642289   46163 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-328488
	
	I1026 01:33:17.642318   46163 main.go:141] libmachine: (multinode-328488) Calling .GetMachineName
	I1026 01:33:17.642538   46163 buildroot.go:166] provisioning hostname "multinode-328488"
	I1026 01:33:17.642561   46163 main.go:141] libmachine: (multinode-328488) Calling .GetMachineName
	I1026 01:33:17.642711   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:33:17.645380   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.645846   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:33:17.645872   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.646008   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:33:17.646169   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:17.646295   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:17.646414   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:33:17.646576   46163 main.go:141] libmachine: Using SSH client type: native
	I1026 01:33:17.646784   46163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1026 01:33:17.646796   46163 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-328488 && echo "multinode-328488" | sudo tee /etc/hostname
	I1026 01:33:17.775394   46163 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-328488
	
	I1026 01:33:17.775421   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:33:17.778193   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.778555   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:33:17.778590   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.778718   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:33:17.778916   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:17.779047   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:17.779170   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:33:17.779328   46163 main.go:141] libmachine: Using SSH client type: native
	I1026 01:33:17.779495   46163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1026 01:33:17.779512   46163 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-328488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-328488/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-328488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:33:17.889827   46163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
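The two SSH commands above give the VM its persistent hostname and make sure /etc/hosts carries a matching 127.0.1.1 entry. A minimal read-only check of the result, assuming the multinode-328488 profile is still running and using only the minikube CLI:

	minikube -p multinode-328488 ssh -- hostname
	minikube -p multinode-328488 ssh -- grep 127.0.1.1 /etc/hosts   # should show the multinode-328488 entry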
	I1026 01:33:17.889858   46163 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:33:17.889879   46163 buildroot.go:174] setting up certificates
	I1026 01:33:17.889898   46163 provision.go:84] configureAuth start
	I1026 01:33:17.889912   46163 main.go:141] libmachine: (multinode-328488) Calling .GetMachineName
	I1026 01:33:17.890200   46163 main.go:141] libmachine: (multinode-328488) Calling .GetIP
	I1026 01:33:17.892550   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.892917   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:33:17.892946   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.893099   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:33:17.895364   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.895639   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:33:17.895666   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:17.895782   46163 provision.go:143] copyHostCerts
	I1026 01:33:17.895810   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:33:17.895850   46163 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:33:17.895861   46163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:33:17.895945   46163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:33:17.896041   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:33:17.896067   46163 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:33:17.896076   46163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:33:17.896113   46163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:33:17.896173   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:33:17.896195   46163 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:33:17.896202   46163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:33:17.896233   46163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:33:17.896302   46163 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.multinode-328488 san=[127.0.0.1 192.168.39.35 localhost minikube multinode-328488]
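The server certificate generated here is later copied to /etc/docker/server.pem on the node; its SAN list should match the san=[...] values above. A sketch for inspecting it with openssl, using the path from the log:

	# print the Subject Alternative Name extension of the generated server cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'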
	I1026 01:33:18.046434   46163 provision.go:177] copyRemoteCerts
	I1026 01:33:18.046487   46163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:33:18.046509   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:33:18.049183   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:18.049535   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:33:18.049563   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:18.049737   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:33:18.049885   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:18.050052   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:33:18.050135   46163 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/multinode-328488/id_rsa Username:docker}
	I1026 01:33:18.136102   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1026 01:33:18.136162   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:33:18.161196   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1026 01:33:18.161252   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1026 01:33:18.185805   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1026 01:33:18.185883   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 01:33:18.210153   46163 provision.go:87] duration metric: took 320.240077ms to configureAuth
	I1026 01:33:18.210191   46163 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:33:18.210433   46163 config.go:182] Loaded profile config "multinode-328488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:33:18.210500   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:33:18.213140   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:18.213644   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:33:18.213689   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:33:18.213937   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:33:18.214109   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:18.214250   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:33:18.214373   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:33:18.214560   46163 main.go:141] libmachine: Using SSH client type: native
	I1026 01:33:18.214737   46163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1026 01:33:18.214755   46163 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:34:49.032290   46163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:34:49.032336   46163 machine.go:96] duration metric: took 1m31.511932811s to provisionDockerMachine
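Note the gap between issuing the command above at 01:33:18 and its completion at 01:34:49: roughly 91 seconds, so essentially all of the 1m31.5s reported for provisionDockerMachine is spent in the "systemctl restart crio" step. If that restart ever needs investigating, a read-only sketch of the usual checks on the node would be:

	sudo journalctl -u crio -n 100 --no-pager   # last CRI-O log lines around the restart
	sudo systemctl status crio --no-pager       # current unit state and recent messages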
	I1026 01:34:49.032354   46163 start.go:293] postStartSetup for "multinode-328488" (driver="kvm2")
	I1026 01:34:49.032370   46163 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:34:49.032397   46163 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:34:49.032710   46163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:34:49.032745   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:34:49.036094   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.036564   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:34:49.036596   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.036745   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:34:49.036950   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:34:49.037090   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:34:49.037204   46163 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/multinode-328488/id_rsa Username:docker}
	I1026 01:34:49.124246   46163 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:34:49.128168   46163 command_runner.go:130] > NAME=Buildroot
	I1026 01:34:49.128188   46163 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1026 01:34:49.128194   46163 command_runner.go:130] > ID=buildroot
	I1026 01:34:49.128202   46163 command_runner.go:130] > VERSION_ID=2023.02.9
	I1026 01:34:49.128210   46163 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1026 01:34:49.128300   46163 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:34:49.128322   46163 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:34:49.128394   46163 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:34:49.128485   46163 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:34:49.128496   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /etc/ssl/certs/176152.pem
	I1026 01:34:49.128617   46163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:34:49.137348   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:34:49.159468   46163 start.go:296] duration metric: took 127.100086ms for postStartSetup
	I1026 01:34:49.159508   46163 fix.go:56] duration metric: took 1m31.660434402s for fixHost
	I1026 01:34:49.159531   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:34:49.162346   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.162710   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:34:49.162732   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.162913   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:34:49.163084   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:34:49.163220   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:34:49.163324   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:34:49.163471   46163 main.go:141] libmachine: Using SSH client type: native
	I1026 01:34:49.163635   46163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1026 01:34:49.163646   46163 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:34:49.273919   46163 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729906489.248573908
	
	I1026 01:34:49.273944   46163 fix.go:216] guest clock: 1729906489.248573908
	I1026 01:34:49.273951   46163 fix.go:229] Guest: 2024-10-26 01:34:49.248573908 +0000 UTC Remote: 2024-10-26 01:34:49.159513005 +0000 UTC m=+91.782993940 (delta=89.060903ms)
	I1026 01:34:49.273995   46163 fix.go:200] guest clock delta is within tolerance: 89.060903ms
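The delta reported above is simply the guest timestamp minus the host-side remote timestamp: 1729906489.248573908 - 1729906489.159513005 = 0.089060903 s, i.e. the 89.060903ms shown, well inside the tolerance. The same arithmetic as a one-liner:

	echo '1729906489.248573908 - 1729906489.159513005' | bc   # prints .089060903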
	I1026 01:34:49.274001   46163 start.go:83] releasing machines lock for "multinode-328488", held for 1m31.774945295s
	I1026 01:34:49.274018   46163 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:34:49.274252   46163 main.go:141] libmachine: (multinode-328488) Calling .GetIP
	I1026 01:34:49.276716   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.277062   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:34:49.277090   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.277230   46163 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:34:49.277751   46163 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:34:49.277909   46163 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:34:49.278013   46163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:34:49.278057   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:34:49.278114   46163 ssh_runner.go:195] Run: cat /version.json
	I1026 01:34:49.278140   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:34:49.280630   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.280896   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.281011   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:34:49.281044   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.281183   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:34:49.281352   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:34:49.281374   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:34:49.281375   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:49.281532   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:34:49.281544   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:34:49.281711   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:34:49.281703   46163 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/multinode-328488/id_rsa Username:docker}
	I1026 01:34:49.281824   46163 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:34:49.281950   46163 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/multinode-328488/id_rsa Username:docker}
	I1026 01:34:49.394245   46163 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1026 01:34:49.394293   46163 command_runner.go:130] > {"iso_version": "v1.34.0-1729002252-19806", "kicbase_version": "v0.0.45-1728382586-19774", "minikube_version": "v1.34.0", "commit": "0b046a85be42f4631dd3453091a30d7fc1803a43"}
	I1026 01:34:49.394455   46163 ssh_runner.go:195] Run: systemctl --version
	I1026 01:34:49.400364   46163 command_runner.go:130] > systemd 252 (252)
	I1026 01:34:49.400408   46163 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1026 01:34:49.400476   46163 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:34:49.557526   46163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1026 01:34:49.565251   46163 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1026 01:34:49.565313   46163 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:34:49.565361   46163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:34:49.574827   46163 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
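The find/mv above renames any bridge or podman CNI configs to *.mk_disabled so only the CNI that minikube manages stays active; here there was nothing to disable. To see what is actually present under /etc/cni/net.d on the node, a simple read-only check is:

	minikube -p multinode-328488 ssh -- sudo ls -la /etc/cni/net.d/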
	I1026 01:34:49.574856   46163 start.go:495] detecting cgroup driver to use...
	I1026 01:34:49.574923   46163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:34:49.591196   46163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:34:49.605179   46163 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:34:49.605239   46163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:34:49.618964   46163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:34:49.633086   46163 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:34:49.781533   46163 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:34:49.938597   46163 docker.go:233] disabling docker service ...
	I1026 01:34:49.938676   46163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:34:49.957306   46163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:34:49.970947   46163 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:34:50.108603   46163 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:34:50.244548   46163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
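At this point cri-docker and docker have both been stopped and masked so they cannot be socket-activated underneath CRI-O. A quick verification sketch (is-enabled prints "masked" and exits non-zero for masked units):

	minikube -p multinode-328488 ssh -- sudo systemctl is-enabled docker.service cri-docker.service || true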
	I1026 01:34:50.258723   46163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:34:50.275991   46163 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
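With /etc/crictl.yaml pointing at the CRI-O socket, crictl can talk to the runtime without extra flags. A few read-only examples:

	sudo crictl info     # runtime status and configuration as JSON
	sudo crictl ps -a    # all containers known to CRI-O
	sudo crictl images   # cached images (the JSON form of this appears further down in the log)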
	I1026 01:34:50.276240   46163 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:34:50.276323   46163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:34:50.287767   46163 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:34:50.287859   46163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:34:50.298540   46163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:34:50.309248   46163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:34:50.320754   46163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:34:50.332192   46163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:34:50.343125   46163 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:34:50.354191   46163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
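The sed commands above all edit the same drop-in, /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroupfs cgroup manager, conmon_cgroup = "pod" and the unprivileged-port sysctl. One way to confirm the result before the restart below:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf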
	I1026 01:34:50.365288   46163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:34:50.375341   46163 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1026 01:34:50.375434   46163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:34:50.385434   46163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:34:50.519806   46163 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 01:34:50.711608   46163 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:34:50.711677   46163 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:34:50.716212   46163 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1026 01:34:50.716228   46163 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1026 01:34:50.716234   46163 command_runner.go:130] > Device: 0,22	Inode: 1281        Links: 1
	I1026 01:34:50.716243   46163 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1026 01:34:50.716248   46163 command_runner.go:130] > Access: 2024-10-26 01:34:50.584582012 +0000
	I1026 01:34:50.716253   46163 command_runner.go:130] > Modify: 2024-10-26 01:34:50.584582012 +0000
	I1026 01:34:50.716260   46163 command_runner.go:130] > Change: 2024-10-26 01:34:50.584582012 +0000
	I1026 01:34:50.716265   46163 command_runner.go:130] >  Birth: -
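The 60s wait for /var/run/crio/crio.sock amounts to polling until the socket file exists; a roughly equivalent shell sketch:

	for i in $(seq 1 60); do
	  [ -S /var/run/crio/crio.sock ] && { echo "crio socket is up"; break; }   # -S tests for a socket file
	  sleep 1
	done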
	I1026 01:34:50.716378   46163 start.go:563] Will wait 60s for crictl version
	I1026 01:34:50.716441   46163 ssh_runner.go:195] Run: which crictl
	I1026 01:34:50.719880   46163 command_runner.go:130] > /usr/bin/crictl
	I1026 01:34:50.719959   46163 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:34:50.758298   46163 command_runner.go:130] > Version:  0.1.0
	I1026 01:34:50.758322   46163 command_runner.go:130] > RuntimeName:  cri-o
	I1026 01:34:50.758327   46163 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1026 01:34:50.758333   46163 command_runner.go:130] > RuntimeApiVersion:  v1
	I1026 01:34:50.758398   46163 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:34:50.758499   46163 ssh_runner.go:195] Run: crio --version
	I1026 01:34:50.786260   46163 command_runner.go:130] > crio version 1.29.1
	I1026 01:34:50.786289   46163 command_runner.go:130] > Version:        1.29.1
	I1026 01:34:50.786298   46163 command_runner.go:130] > GitCommit:      unknown
	I1026 01:34:50.786325   46163 command_runner.go:130] > GitCommitDate:  unknown
	I1026 01:34:50.786332   46163 command_runner.go:130] > GitTreeState:   clean
	I1026 01:34:50.786341   46163 command_runner.go:130] > BuildDate:      2024-10-15T20:00:52Z
	I1026 01:34:50.786347   46163 command_runner.go:130] > GoVersion:      go1.21.6
	I1026 01:34:50.786355   46163 command_runner.go:130] > Compiler:       gc
	I1026 01:34:50.786362   46163 command_runner.go:130] > Platform:       linux/amd64
	I1026 01:34:50.786369   46163 command_runner.go:130] > Linkmode:       dynamic
	I1026 01:34:50.786377   46163 command_runner.go:130] > BuildTags:      
	I1026 01:34:50.786386   46163 command_runner.go:130] >   containers_image_ostree_stub
	I1026 01:34:50.786393   46163 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1026 01:34:50.786403   46163 command_runner.go:130] >   btrfs_noversion
	I1026 01:34:50.786411   46163 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1026 01:34:50.786417   46163 command_runner.go:130] >   libdm_no_deferred_remove
	I1026 01:34:50.786424   46163 command_runner.go:130] >   seccomp
	I1026 01:34:50.786432   46163 command_runner.go:130] > LDFlags:          unknown
	I1026 01:34:50.786439   46163 command_runner.go:130] > SeccompEnabled:   true
	I1026 01:34:50.786447   46163 command_runner.go:130] > AppArmorEnabled:  false
	I1026 01:34:50.787722   46163 ssh_runner.go:195] Run: crio --version
	I1026 01:34:50.815858   46163 command_runner.go:130] > crio version 1.29.1
	I1026 01:34:50.815899   46163 command_runner.go:130] > Version:        1.29.1
	I1026 01:34:50.815905   46163 command_runner.go:130] > GitCommit:      unknown
	I1026 01:34:50.815909   46163 command_runner.go:130] > GitCommitDate:  unknown
	I1026 01:34:50.815913   46163 command_runner.go:130] > GitTreeState:   clean
	I1026 01:34:50.815918   46163 command_runner.go:130] > BuildDate:      2024-10-15T20:00:52Z
	I1026 01:34:50.815922   46163 command_runner.go:130] > GoVersion:      go1.21.6
	I1026 01:34:50.815926   46163 command_runner.go:130] > Compiler:       gc
	I1026 01:34:50.815930   46163 command_runner.go:130] > Platform:       linux/amd64
	I1026 01:34:50.815934   46163 command_runner.go:130] > Linkmode:       dynamic
	I1026 01:34:50.815942   46163 command_runner.go:130] > BuildTags:      
	I1026 01:34:50.815948   46163 command_runner.go:130] >   containers_image_ostree_stub
	I1026 01:34:50.815952   46163 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1026 01:34:50.815960   46163 command_runner.go:130] >   btrfs_noversion
	I1026 01:34:50.815964   46163 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1026 01:34:50.815967   46163 command_runner.go:130] >   libdm_no_deferred_remove
	I1026 01:34:50.815971   46163 command_runner.go:130] >   seccomp
	I1026 01:34:50.815978   46163 command_runner.go:130] > LDFlags:          unknown
	I1026 01:34:50.815981   46163 command_runner.go:130] > SeccompEnabled:   true
	I1026 01:34:50.815986   46163 command_runner.go:130] > AppArmorEnabled:  false
	I1026 01:34:50.819370   46163 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:34:50.820742   46163 main.go:141] libmachine: (multinode-328488) Calling .GetIP
	I1026 01:34:50.823538   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:50.823872   46163 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:34:50.823902   46163 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:34:50.824138   46163 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:34:50.828384   46163 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1026 01:34:50.828519   46163 kubeadm.go:883] updating cluster {Name:multinode-328488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-328488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 01:34:50.828699   46163 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:34:50.828758   46163 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:34:50.870101   46163 command_runner.go:130] > {
	I1026 01:34:50.870128   46163 command_runner.go:130] >   "images": [
	I1026 01:34:50.870134   46163 command_runner.go:130] >     {
	I1026 01:34:50.870144   46163 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1026 01:34:50.870150   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.870158   46163 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1026 01:34:50.870164   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870170   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.870183   46163 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1026 01:34:50.870196   46163 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1026 01:34:50.870217   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870229   46163 command_runner.go:130] >       "size": "94965812",
	I1026 01:34:50.870238   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.870247   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.870258   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.870268   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.870274   46163 command_runner.go:130] >     },
	I1026 01:34:50.870282   46163 command_runner.go:130] >     {
	I1026 01:34:50.870293   46163 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1026 01:34:50.870303   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.870313   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1026 01:34:50.870322   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870332   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.870347   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1026 01:34:50.870361   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1026 01:34:50.870374   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870384   46163 command_runner.go:130] >       "size": "1363676",
	I1026 01:34:50.870392   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.870410   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.870419   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.870426   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.870436   46163 command_runner.go:130] >     },
	I1026 01:34:50.870444   46163 command_runner.go:130] >     {
	I1026 01:34:50.870455   46163 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1026 01:34:50.870466   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.870478   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1026 01:34:50.870487   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870495   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.870511   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1026 01:34:50.870527   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1026 01:34:50.870535   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870543   46163 command_runner.go:130] >       "size": "31470524",
	I1026 01:34:50.870553   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.870568   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.870578   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.870585   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.870593   46163 command_runner.go:130] >     },
	I1026 01:34:50.870600   46163 command_runner.go:130] >     {
	I1026 01:34:50.870613   46163 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1026 01:34:50.870622   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.870631   46163 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1026 01:34:50.870639   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870647   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.870663   46163 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1026 01:34:50.870692   46163 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1026 01:34:50.870702   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870708   46163 command_runner.go:130] >       "size": "63273227",
	I1026 01:34:50.870715   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.870722   46163 command_runner.go:130] >       "username": "nonroot",
	I1026 01:34:50.870732   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.870740   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.870748   46163 command_runner.go:130] >     },
	I1026 01:34:50.870754   46163 command_runner.go:130] >     {
	I1026 01:34:50.870767   46163 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1026 01:34:50.870774   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.870782   46163 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1026 01:34:50.870789   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870799   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.870814   46163 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1026 01:34:50.870828   46163 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1026 01:34:50.870836   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870844   46163 command_runner.go:130] >       "size": "149009664",
	I1026 01:34:50.870854   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.870863   46163 command_runner.go:130] >         "value": "0"
	I1026 01:34:50.870871   46163 command_runner.go:130] >       },
	I1026 01:34:50.870879   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.870895   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.870906   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.870913   46163 command_runner.go:130] >     },
	I1026 01:34:50.870920   46163 command_runner.go:130] >     {
	I1026 01:34:50.870932   46163 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1026 01:34:50.870940   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.870951   46163 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1026 01:34:50.870960   46163 command_runner.go:130] >       ],
	I1026 01:34:50.870968   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.870984   46163 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1026 01:34:50.870999   46163 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1026 01:34:50.871007   46163 command_runner.go:130] >       ],
	I1026 01:34:50.871015   46163 command_runner.go:130] >       "size": "95274464",
	I1026 01:34:50.871031   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.871041   46163 command_runner.go:130] >         "value": "0"
	I1026 01:34:50.871047   46163 command_runner.go:130] >       },
	I1026 01:34:50.871054   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.871064   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.871072   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.871081   46163 command_runner.go:130] >     },
	I1026 01:34:50.871088   46163 command_runner.go:130] >     {
	I1026 01:34:50.871101   46163 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1026 01:34:50.871111   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.871122   46163 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1026 01:34:50.871131   46163 command_runner.go:130] >       ],
	I1026 01:34:50.871138   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.871154   46163 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1026 01:34:50.871170   46163 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1026 01:34:50.871180   46163 command_runner.go:130] >       ],
	I1026 01:34:50.871188   46163 command_runner.go:130] >       "size": "89474374",
	I1026 01:34:50.871198   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.871207   46163 command_runner.go:130] >         "value": "0"
	I1026 01:34:50.871214   46163 command_runner.go:130] >       },
	I1026 01:34:50.871235   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.871245   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.871252   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.871259   46163 command_runner.go:130] >     },
	I1026 01:34:50.871266   46163 command_runner.go:130] >     {
	I1026 01:34:50.871279   46163 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1026 01:34:50.871289   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.871298   46163 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1026 01:34:50.871307   46163 command_runner.go:130] >       ],
	I1026 01:34:50.871315   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.871741   46163 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1026 01:34:50.871802   46163 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1026 01:34:50.871818   46163 command_runner.go:130] >       ],
	I1026 01:34:50.871834   46163 command_runner.go:130] >       "size": "92783513",
	I1026 01:34:50.871856   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.871871   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.871994   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.872012   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.872027   46163 command_runner.go:130] >     },
	I1026 01:34:50.872040   46163 command_runner.go:130] >     {
	I1026 01:34:50.872066   46163 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1026 01:34:50.872080   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.872097   46163 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1026 01:34:50.872112   46163 command_runner.go:130] >       ],
	I1026 01:34:50.872127   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.872153   46163 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1026 01:34:50.872173   46163 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1026 01:34:50.872193   46163 command_runner.go:130] >       ],
	I1026 01:34:50.872207   46163 command_runner.go:130] >       "size": "68457798",
	I1026 01:34:50.872221   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.872236   46163 command_runner.go:130] >         "value": "0"
	I1026 01:34:50.872250   46163 command_runner.go:130] >       },
	I1026 01:34:50.872265   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.872305   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.872320   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.872334   46163 command_runner.go:130] >     },
	I1026 01:34:50.872348   46163 command_runner.go:130] >     {
	I1026 01:34:50.872364   46163 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1026 01:34:50.872379   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.872400   46163 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1026 01:34:50.872413   46163 command_runner.go:130] >       ],
	I1026 01:34:50.872428   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.872447   46163 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1026 01:34:50.872472   46163 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1026 01:34:50.872486   46163 command_runner.go:130] >       ],
	I1026 01:34:50.872501   46163 command_runner.go:130] >       "size": "742080",
	I1026 01:34:50.872515   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.872535   46163 command_runner.go:130] >         "value": "65535"
	I1026 01:34:50.872550   46163 command_runner.go:130] >       },
	I1026 01:34:50.872564   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.872578   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.872592   46163 command_runner.go:130] >       "pinned": true
	I1026 01:34:50.872624   46163 command_runner.go:130] >     }
	I1026 01:34:50.872635   46163 command_runner.go:130] >   ]
	I1026 01:34:50.872673   46163 command_runner.go:130] > }
	I1026 01:34:50.873301   46163 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 01:34:50.873317   46163 crio.go:433] Images already preloaded, skipping extraction
	I1026 01:34:50.873361   46163 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:34:50.905504   46163 command_runner.go:130] > {
	I1026 01:34:50.905536   46163 command_runner.go:130] >   "images": [
	I1026 01:34:50.905543   46163 command_runner.go:130] >     {
	I1026 01:34:50.905551   46163 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1026 01:34:50.905557   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.905563   46163 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1026 01:34:50.905567   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905571   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.905581   46163 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1026 01:34:50.905589   46163 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1026 01:34:50.905592   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905598   46163 command_runner.go:130] >       "size": "94965812",
	I1026 01:34:50.905602   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.905609   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.905616   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.905620   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.905626   46163 command_runner.go:130] >     },
	I1026 01:34:50.905629   46163 command_runner.go:130] >     {
	I1026 01:34:50.905635   46163 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1026 01:34:50.905639   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.905648   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1026 01:34:50.905652   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905658   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.905664   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1026 01:34:50.905674   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1026 01:34:50.905677   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905681   46163 command_runner.go:130] >       "size": "1363676",
	I1026 01:34:50.905691   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.905700   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.905704   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.905708   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.905711   46163 command_runner.go:130] >     },
	I1026 01:34:50.905715   46163 command_runner.go:130] >     {
	I1026 01:34:50.905721   46163 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1026 01:34:50.905726   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.905731   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1026 01:34:50.905734   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905744   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.905754   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1026 01:34:50.905761   46163 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1026 01:34:50.905767   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905771   46163 command_runner.go:130] >       "size": "31470524",
	I1026 01:34:50.905775   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.905780   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.905786   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.905789   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.905793   46163 command_runner.go:130] >     },
	I1026 01:34:50.905798   46163 command_runner.go:130] >     {
	I1026 01:34:50.905804   46163 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1026 01:34:50.905811   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.905816   46163 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1026 01:34:50.905820   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905823   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.905832   46163 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1026 01:34:50.905843   46163 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1026 01:34:50.905849   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905853   46163 command_runner.go:130] >       "size": "63273227",
	I1026 01:34:50.905857   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.905863   46163 command_runner.go:130] >       "username": "nonroot",
	I1026 01:34:50.905870   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.905873   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.905877   46163 command_runner.go:130] >     },
	I1026 01:34:50.905881   46163 command_runner.go:130] >     {
	I1026 01:34:50.905887   46163 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1026 01:34:50.905898   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.905903   46163 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1026 01:34:50.905909   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905913   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.905919   46163 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1026 01:34:50.905926   46163 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1026 01:34:50.905937   46163 command_runner.go:130] >       ],
	I1026 01:34:50.905943   46163 command_runner.go:130] >       "size": "149009664",
	I1026 01:34:50.905947   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.905953   46163 command_runner.go:130] >         "value": "0"
	I1026 01:34:50.905957   46163 command_runner.go:130] >       },
	I1026 01:34:50.905960   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.905964   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.905968   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.905972   46163 command_runner.go:130] >     },
	I1026 01:34:50.905975   46163 command_runner.go:130] >     {
	I1026 01:34:50.905987   46163 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1026 01:34:50.905993   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.906001   46163 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1026 01:34:50.906006   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906011   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.906026   46163 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1026 01:34:50.906036   46163 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1026 01:34:50.906041   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906046   46163 command_runner.go:130] >       "size": "95274464",
	I1026 01:34:50.906052   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.906058   46163 command_runner.go:130] >         "value": "0"
	I1026 01:34:50.906063   46163 command_runner.go:130] >       },
	I1026 01:34:50.906070   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.906078   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.906086   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.906092   46163 command_runner.go:130] >     },
	I1026 01:34:50.906101   46163 command_runner.go:130] >     {
	I1026 01:34:50.906112   46163 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1026 01:34:50.906121   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.906130   46163 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1026 01:34:50.906137   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906147   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.906160   46163 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1026 01:34:50.906183   46163 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1026 01:34:50.906194   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906201   46163 command_runner.go:130] >       "size": "89474374",
	I1026 01:34:50.906208   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.906217   46163 command_runner.go:130] >         "value": "0"
	I1026 01:34:50.906225   46163 command_runner.go:130] >       },
	I1026 01:34:50.906235   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.906242   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.906249   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.906258   46163 command_runner.go:130] >     },
	I1026 01:34:50.906265   46163 command_runner.go:130] >     {
	I1026 01:34:50.906278   46163 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1026 01:34:50.906288   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.906300   46163 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1026 01:34:50.906307   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906316   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.906347   46163 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1026 01:34:50.906361   46163 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1026 01:34:50.906368   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906378   46163 command_runner.go:130] >       "size": "92783513",
	I1026 01:34:50.906388   46163 command_runner.go:130] >       "uid": null,
	I1026 01:34:50.906395   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.906405   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.906414   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.906421   46163 command_runner.go:130] >     },
	I1026 01:34:50.906428   46163 command_runner.go:130] >     {
	I1026 01:34:50.906441   46163 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1026 01:34:50.906451   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.906463   46163 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1026 01:34:50.906471   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906478   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.906494   46163 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1026 01:34:50.906510   46163 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1026 01:34:50.906527   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906537   46163 command_runner.go:130] >       "size": "68457798",
	I1026 01:34:50.906546   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.906554   46163 command_runner.go:130] >         "value": "0"
	I1026 01:34:50.906562   46163 command_runner.go:130] >       },
	I1026 01:34:50.906570   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.906579   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.906588   46163 command_runner.go:130] >       "pinned": false
	I1026 01:34:50.906596   46163 command_runner.go:130] >     },
	I1026 01:34:50.906603   46163 command_runner.go:130] >     {
	I1026 01:34:50.906616   46163 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1026 01:34:50.906626   46163 command_runner.go:130] >       "repoTags": [
	I1026 01:34:50.906636   46163 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1026 01:34:50.906641   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906647   46163 command_runner.go:130] >       "repoDigests": [
	I1026 01:34:50.906659   46163 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1026 01:34:50.906676   46163 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1026 01:34:50.906685   46163 command_runner.go:130] >       ],
	I1026 01:34:50.906693   46163 command_runner.go:130] >       "size": "742080",
	I1026 01:34:50.906701   46163 command_runner.go:130] >       "uid": {
	I1026 01:34:50.906708   46163 command_runner.go:130] >         "value": "65535"
	I1026 01:34:50.906717   46163 command_runner.go:130] >       },
	I1026 01:34:50.906725   46163 command_runner.go:130] >       "username": "",
	I1026 01:34:50.906740   46163 command_runner.go:130] >       "spec": null,
	I1026 01:34:50.906747   46163 command_runner.go:130] >       "pinned": true
	I1026 01:34:50.906756   46163 command_runner.go:130] >     }
	I1026 01:34:50.906762   46163 command_runner.go:130] >   ]
	I1026 01:34:50.906770   46163 command_runner.go:130] > }
	I1026 01:34:50.906901   46163 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 01:34:50.906913   46163 cache_images.go:84] Images are preloaded, skipping loading
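	[editor's note] The image listing that minikube parses above comes from `sudo crictl images --output json`, and the payload has the flat shape visible in the log (an "images" array whose entries carry "id", "repoTags", "repoDigests", "size", "uid", "username" and "pinned"). As a hedged illustration only -- this is not minikube's own crio.go code, just a minimal sketch assuming that JSON shape -- decoding it in Go could look like:

	// sketch: decode `crictl images --output json` read from stdin and print one line per image
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`     // size is a decimal string in the log above
			Username    string   `json:"username"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		var list imageList
		if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range list.Images {
			fmt.Printf("%v pinned=%v size=%s\n", img.RepoTags, img.Pinned, img.Size)
		}
	}

	A possible invocation (on a node with crictl available): `sudo crictl images --output json | go run parse_images.go`. The file name and program are hypothetical; only the JSON field names are taken from the log.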
	I1026 01:34:50.906921   46163 kubeadm.go:934] updating node { 192.168.39.35 8443 v1.31.2 crio true true} ...
	I1026 01:34:50.907034   46163 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-328488 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.35
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-328488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
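	[editor's note] The block above is the kubelet systemd drop-in that minikube writes for this node, parameterized by the kubelet binary path, the node name (multinode-328488) and the node IP (192.168.39.35). As a hedged sketch only -- not minikube's actual kubeadm.go template code -- an equivalent drop-in could be rendered with text/template from those three values:

	// sketch: render a kubelet systemd drop-in from the values seen in the log above
	package main

	import (
		"os"
		"text/template"
	)

	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart={{.KubeletBin}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		if err := t.Execute(os.Stdout, map[string]string{
			"KubeletBin": "/var/lib/minikube/binaries/v1.31.2/kubelet", // from the log above
			"NodeName":   "multinode-328488",
			"NodeIP":     "192.168.39.35",
		}); err != nil {
			os.Exit(1)
		}
	}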
	I1026 01:34:50.907118   46163 ssh_runner.go:195] Run: crio config
	I1026 01:34:50.952629   46163 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1026 01:34:50.952658   46163 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1026 01:34:50.952668   46163 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1026 01:34:50.952673   46163 command_runner.go:130] > #
	I1026 01:34:50.952684   46163 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1026 01:34:50.952692   46163 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1026 01:34:50.952700   46163 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1026 01:34:50.952718   46163 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1026 01:34:50.952725   46163 command_runner.go:130] > # reload'.
	I1026 01:34:50.952735   46163 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1026 01:34:50.952750   46163 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1026 01:34:50.952763   46163 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1026 01:34:50.952777   46163 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1026 01:34:50.952785   46163 command_runner.go:130] > [crio]
	I1026 01:34:50.952795   46163 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1026 01:34:50.952804   46163 command_runner.go:130] > # containers images, in this directory.
	I1026 01:34:50.952815   46163 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1026 01:34:50.952846   46163 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1026 01:34:50.952856   46163 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1026 01:34:50.952868   46163 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1026 01:34:50.952878   46163 command_runner.go:130] > # imagestore = ""
	I1026 01:34:50.952900   46163 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1026 01:34:50.952914   46163 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1026 01:34:50.952925   46163 command_runner.go:130] > storage_driver = "overlay"
	I1026 01:34:50.952936   46163 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1026 01:34:50.952948   46163 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1026 01:34:50.952958   46163 command_runner.go:130] > storage_option = [
	I1026 01:34:50.952969   46163 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1026 01:34:50.952977   46163 command_runner.go:130] > ]
	I1026 01:34:50.952989   46163 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1026 01:34:50.953002   46163 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1026 01:34:50.953012   46163 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1026 01:34:50.953019   46163 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1026 01:34:50.953028   46163 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1026 01:34:50.953036   46163 command_runner.go:130] > # always happen on a node reboot
	I1026 01:34:50.953047   46163 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1026 01:34:50.953059   46163 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1026 01:34:50.953068   46163 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1026 01:34:50.953073   46163 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1026 01:34:50.953078   46163 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1026 01:34:50.953085   46163 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1026 01:34:50.953094   46163 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1026 01:34:50.953098   46163 command_runner.go:130] > # internal_wipe = true
	I1026 01:34:50.953106   46163 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1026 01:34:50.953113   46163 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1026 01:34:50.953118   46163 command_runner.go:130] > # internal_repair = false
	I1026 01:34:50.953125   46163 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1026 01:34:50.953131   46163 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1026 01:34:50.953136   46163 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1026 01:34:50.953142   46163 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1026 01:34:50.953147   46163 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1026 01:34:50.953151   46163 command_runner.go:130] > [crio.api]
	I1026 01:34:50.953156   46163 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1026 01:34:50.953167   46163 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1026 01:34:50.953178   46163 command_runner.go:130] > # IP address on which the stream server will listen.
	I1026 01:34:50.953186   46163 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1026 01:34:50.953198   46163 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1026 01:34:50.953210   46163 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1026 01:34:50.953219   46163 command_runner.go:130] > # stream_port = "0"
	I1026 01:34:50.953228   46163 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1026 01:34:50.953236   46163 command_runner.go:130] > # stream_enable_tls = false
	I1026 01:34:50.953245   46163 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1026 01:34:50.953255   46163 command_runner.go:130] > # stream_idle_timeout = ""
	I1026 01:34:50.953266   46163 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1026 01:34:50.953279   46163 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1026 01:34:50.953287   46163 command_runner.go:130] > # minutes.
	I1026 01:34:50.953293   46163 command_runner.go:130] > # stream_tls_cert = ""
	I1026 01:34:50.953306   46163 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1026 01:34:50.953316   46163 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1026 01:34:50.953326   46163 command_runner.go:130] > # stream_tls_key = ""
	I1026 01:34:50.953336   46163 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1026 01:34:50.953348   46163 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1026 01:34:50.953367   46163 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1026 01:34:50.953377   46163 command_runner.go:130] > # stream_tls_ca = ""
	I1026 01:34:50.953390   46163 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1026 01:34:50.953400   46163 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1026 01:34:50.953411   46163 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1026 01:34:50.953433   46163 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1026 01:34:50.953447   46163 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1026 01:34:50.953459   46163 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1026 01:34:50.953468   46163 command_runner.go:130] > [crio.runtime]
	I1026 01:34:50.953478   46163 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1026 01:34:50.953489   46163 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1026 01:34:50.953496   46163 command_runner.go:130] > # "nofile=1024:2048"
	I1026 01:34:50.953506   46163 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1026 01:34:50.953516   46163 command_runner.go:130] > # default_ulimits = [
	I1026 01:34:50.953522   46163 command_runner.go:130] > # ]
	I1026 01:34:50.953534   46163 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1026 01:34:50.953543   46163 command_runner.go:130] > # no_pivot = false
	I1026 01:34:50.953552   46163 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1026 01:34:50.953561   46163 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1026 01:34:50.953566   46163 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1026 01:34:50.953582   46163 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1026 01:34:50.953593   46163 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1026 01:34:50.953603   46163 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1026 01:34:50.953610   46163 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1026 01:34:50.953621   46163 command_runner.go:130] > # Cgroup setting for conmon
	I1026 01:34:50.953631   46163 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1026 01:34:50.953638   46163 command_runner.go:130] > conmon_cgroup = "pod"
	I1026 01:34:50.953647   46163 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1026 01:34:50.953658   46163 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1026 01:34:50.953671   46163 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1026 01:34:50.953677   46163 command_runner.go:130] > conmon_env = [
	I1026 01:34:50.953685   46163 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1026 01:34:50.953696   46163 command_runner.go:130] > ]
	I1026 01:34:50.953704   46163 command_runner.go:130] > # Additional environment variables to set for all the
	I1026 01:34:50.953715   46163 command_runner.go:130] > # containers. These are overridden if set in the
	I1026 01:34:50.953728   46163 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1026 01:34:50.953737   46163 command_runner.go:130] > # default_env = [
	I1026 01:34:50.953743   46163 command_runner.go:130] > # ]
	I1026 01:34:50.953755   46163 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1026 01:34:50.953770   46163 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1026 01:34:50.953782   46163 command_runner.go:130] > # selinux = false
	I1026 01:34:50.953791   46163 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1026 01:34:50.953804   46163 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1026 01:34:50.953816   46163 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1026 01:34:50.953826   46163 command_runner.go:130] > # seccomp_profile = ""
	I1026 01:34:50.953834   46163 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1026 01:34:50.953845   46163 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1026 01:34:50.953859   46163 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1026 01:34:50.953869   46163 command_runner.go:130] > # which might increase security.
	I1026 01:34:50.953876   46163 command_runner.go:130] > # This option is currently deprecated,
	I1026 01:34:50.953888   46163 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1026 01:34:50.953903   46163 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1026 01:34:50.953913   46163 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1026 01:34:50.953925   46163 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1026 01:34:50.953935   46163 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1026 01:34:50.953948   46163 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1026 01:34:50.953959   46163 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:34:50.953972   46163 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1026 01:34:50.953985   46163 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1026 01:34:50.953994   46163 command_runner.go:130] > # the cgroup blockio controller.
	I1026 01:34:50.954001   46163 command_runner.go:130] > # blockio_config_file = ""
	I1026 01:34:50.954014   46163 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1026 01:34:50.954024   46163 command_runner.go:130] > # blockio parameters.
	I1026 01:34:50.954031   46163 command_runner.go:130] > # blockio_reload = false
	I1026 01:34:50.954045   46163 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1026 01:34:50.954053   46163 command_runner.go:130] > # irqbalance daemon.
	I1026 01:34:50.954062   46163 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1026 01:34:50.954075   46163 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1026 01:34:50.954090   46163 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1026 01:34:50.954103   46163 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1026 01:34:50.954115   46163 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1026 01:34:50.954129   46163 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1026 01:34:50.954139   46163 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:34:50.954148   46163 command_runner.go:130] > # rdt_config_file = ""
	I1026 01:34:50.954160   46163 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1026 01:34:50.954170   46163 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1026 01:34:50.954198   46163 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1026 01:34:50.954209   46163 command_runner.go:130] > # separate_pull_cgroup = ""
	I1026 01:34:50.954218   46163 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1026 01:34:50.954229   46163 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1026 01:34:50.954238   46163 command_runner.go:130] > # will be added.
	I1026 01:34:50.954246   46163 command_runner.go:130] > # default_capabilities = [
	I1026 01:34:50.954254   46163 command_runner.go:130] > # 	"CHOWN",
	I1026 01:34:50.954264   46163 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1026 01:34:50.954271   46163 command_runner.go:130] > # 	"FSETID",
	I1026 01:34:50.954281   46163 command_runner.go:130] > # 	"FOWNER",
	I1026 01:34:50.954288   46163 command_runner.go:130] > # 	"SETGID",
	I1026 01:34:50.954296   46163 command_runner.go:130] > # 	"SETUID",
	I1026 01:34:50.954303   46163 command_runner.go:130] > # 	"SETPCAP",
	I1026 01:34:50.954312   46163 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1026 01:34:50.954319   46163 command_runner.go:130] > # 	"KILL",
	I1026 01:34:50.954328   46163 command_runner.go:130] > # ]
	I1026 01:34:50.954340   46163 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1026 01:34:50.954352   46163 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1026 01:34:50.954361   46163 command_runner.go:130] > # add_inheritable_capabilities = false
	I1026 01:34:50.954372   46163 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1026 01:34:50.954383   46163 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1026 01:34:50.954392   46163 command_runner.go:130] > default_sysctls = [
	I1026 01:34:50.954402   46163 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1026 01:34:50.954408   46163 command_runner.go:130] > ]
	I1026 01:34:50.954414   46163 command_runner.go:130] > # List of devices on the host that a
	I1026 01:34:50.954425   46163 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1026 01:34:50.954435   46163 command_runner.go:130] > # allowed_devices = [
	I1026 01:34:50.954441   46163 command_runner.go:130] > # 	"/dev/fuse",
	I1026 01:34:50.954450   46163 command_runner.go:130] > # ]
	I1026 01:34:50.954458   46163 command_runner.go:130] > # List of additional devices. specified as
	I1026 01:34:50.954471   46163 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1026 01:34:50.954482   46163 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1026 01:34:50.954494   46163 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1026 01:34:50.954503   46163 command_runner.go:130] > # additional_devices = [
	I1026 01:34:50.954507   46163 command_runner.go:130] > # ]
	I1026 01:34:50.954512   46163 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1026 01:34:50.954522   46163 command_runner.go:130] > # cdi_spec_dirs = [
	I1026 01:34:50.954532   46163 command_runner.go:130] > # 	"/etc/cdi",
	I1026 01:34:50.954538   46163 command_runner.go:130] > # 	"/var/run/cdi",
	I1026 01:34:50.954547   46163 command_runner.go:130] > # ]
	I1026 01:34:50.954557   46163 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1026 01:34:50.954569   46163 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1026 01:34:50.954578   46163 command_runner.go:130] > # Defaults to false.
	I1026 01:34:50.954585   46163 command_runner.go:130] > # device_ownership_from_security_context = false
	I1026 01:34:50.954597   46163 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1026 01:34:50.954605   46163 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1026 01:34:50.954609   46163 command_runner.go:130] > # hooks_dir = [
	I1026 01:34:50.954616   46163 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1026 01:34:50.954625   46163 command_runner.go:130] > # ]
	I1026 01:34:50.954634   46163 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1026 01:34:50.954644   46163 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1026 01:34:50.954656   46163 command_runner.go:130] > # its default mounts from the following two files:
	I1026 01:34:50.954664   46163 command_runner.go:130] > #
	I1026 01:34:50.954673   46163 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1026 01:34:50.954686   46163 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1026 01:34:50.954703   46163 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1026 01:34:50.954711   46163 command_runner.go:130] > #
	I1026 01:34:50.954721   46163 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1026 01:34:50.954735   46163 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1026 01:34:50.954749   46163 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1026 01:34:50.954778   46163 command_runner.go:130] > #      only add mounts it finds in this file.
	I1026 01:34:50.954789   46163 command_runner.go:130] > #
	I1026 01:34:50.954796   46163 command_runner.go:130] > # default_mounts_file = ""
	I1026 01:34:50.954808   46163 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1026 01:34:50.954821   46163 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1026 01:34:50.954831   46163 command_runner.go:130] > pids_limit = 1024
	I1026 01:34:50.954840   46163 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1026 01:34:50.954852   46163 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1026 01:34:50.954863   46163 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1026 01:34:50.954875   46163 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1026 01:34:50.954885   46163 command_runner.go:130] > # log_size_max = -1
	I1026 01:34:50.954900   46163 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1026 01:34:50.954908   46163 command_runner.go:130] > # log_to_journald = false
	I1026 01:34:50.954917   46163 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1026 01:34:50.954925   46163 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1026 01:34:50.954935   46163 command_runner.go:130] > # Path to directory for container attach sockets.
	I1026 01:34:50.954947   46163 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1026 01:34:50.954955   46163 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1026 01:34:50.954964   46163 command_runner.go:130] > # bind_mount_prefix = ""
	I1026 01:34:50.954973   46163 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1026 01:34:50.954983   46163 command_runner.go:130] > # read_only = false
	I1026 01:34:50.954993   46163 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1026 01:34:50.955005   46163 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1026 01:34:50.955015   46163 command_runner.go:130] > # live configuration reload.
	I1026 01:34:50.955021   46163 command_runner.go:130] > # log_level = "info"
	I1026 01:34:50.955029   46163 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1026 01:34:50.955036   46163 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:34:50.955056   46163 command_runner.go:130] > # log_filter = ""
	I1026 01:34:50.955064   46163 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1026 01:34:50.955070   46163 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1026 01:34:50.955076   46163 command_runner.go:130] > # separated by comma.
	I1026 01:34:50.955088   46163 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1026 01:34:50.955097   46163 command_runner.go:130] > # uid_mappings = ""
	I1026 01:34:50.955106   46163 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1026 01:34:50.955119   46163 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1026 01:34:50.955127   46163 command_runner.go:130] > # separated by comma.
	I1026 01:34:50.955139   46163 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1026 01:34:50.955149   46163 command_runner.go:130] > # gid_mappings = ""
	I1026 01:34:50.955158   46163 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1026 01:34:50.955169   46163 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1026 01:34:50.955185   46163 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1026 01:34:50.955200   46163 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1026 01:34:50.955210   46163 command_runner.go:130] > # minimum_mappable_uid = -1
	I1026 01:34:50.955220   46163 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1026 01:34:50.955233   46163 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1026 01:34:50.955245   46163 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1026 01:34:50.955257   46163 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1026 01:34:50.955267   46163 command_runner.go:130] > # minimum_mappable_gid = -1
	I1026 01:34:50.955280   46163 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1026 01:34:50.955291   46163 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1026 01:34:50.955303   46163 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1026 01:34:50.955313   46163 command_runner.go:130] > # ctr_stop_timeout = 30
	I1026 01:34:50.955323   46163 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1026 01:34:50.955334   46163 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1026 01:34:50.955345   46163 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1026 01:34:50.955365   46163 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1026 01:34:50.955373   46163 command_runner.go:130] > drop_infra_ctr = false
	I1026 01:34:50.955385   46163 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1026 01:34:50.955395   46163 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1026 01:34:50.955410   46163 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1026 01:34:50.955419   46163 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1026 01:34:50.955430   46163 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1026 01:34:50.955442   46163 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1026 01:34:50.955452   46163 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1026 01:34:50.955465   46163 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1026 01:34:50.955475   46163 command_runner.go:130] > # shared_cpuset = ""
	I1026 01:34:50.955483   46163 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1026 01:34:50.955490   46163 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1026 01:34:50.955495   46163 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1026 01:34:50.955505   46163 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1026 01:34:50.955515   46163 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1026 01:34:50.955523   46163 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1026 01:34:50.955537   46163 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1026 01:34:50.955546   46163 command_runner.go:130] > # enable_criu_support = false
	I1026 01:34:50.955554   46163 command_runner.go:130] > # Enable/disable the generation of the container,
	I1026 01:34:50.955575   46163 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1026 01:34:50.955585   46163 command_runner.go:130] > # enable_pod_events = false
	I1026 01:34:50.955595   46163 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1026 01:34:50.955607   46163 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1026 01:34:50.955615   46163 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1026 01:34:50.955624   46163 command_runner.go:130] > # default_runtime = "runc"
	I1026 01:34:50.955633   46163 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1026 01:34:50.955647   46163 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1026 01:34:50.955659   46163 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1026 01:34:50.955666   46163 command_runner.go:130] > # creation as a file is not desired either.
	I1026 01:34:50.955673   46163 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1026 01:34:50.955680   46163 command_runner.go:130] > # the hostname is being managed dynamically.
	I1026 01:34:50.955685   46163 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1026 01:34:50.955689   46163 command_runner.go:130] > # ]
	I1026 01:34:50.955707   46163 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1026 01:34:50.955718   46163 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1026 01:34:50.955724   46163 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1026 01:34:50.955731   46163 command_runner.go:130] > # Each entry in the table should follow the format:
	I1026 01:34:50.955734   46163 command_runner.go:130] > #
	I1026 01:34:50.955742   46163 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1026 01:34:50.955746   46163 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1026 01:34:50.955808   46163 command_runner.go:130] > # runtime_type = "oci"
	I1026 01:34:50.955821   46163 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1026 01:34:50.955825   46163 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1026 01:34:50.955830   46163 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1026 01:34:50.955834   46163 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1026 01:34:50.955840   46163 command_runner.go:130] > # monitor_env = []
	I1026 01:34:50.955847   46163 command_runner.go:130] > # privileged_without_host_devices = false
	I1026 01:34:50.955854   46163 command_runner.go:130] > # allowed_annotations = []
	I1026 01:34:50.955861   46163 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1026 01:34:50.955869   46163 command_runner.go:130] > # Where:
	I1026 01:34:50.955878   46163 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1026 01:34:50.955887   46163 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1026 01:34:50.955902   46163 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1026 01:34:50.955914   46163 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1026 01:34:50.955921   46163 command_runner.go:130] > #   in $PATH.
	I1026 01:34:50.955933   46163 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1026 01:34:50.955941   46163 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1026 01:34:50.955957   46163 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1026 01:34:50.955964   46163 command_runner.go:130] > #   state.
	I1026 01:34:50.955970   46163 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1026 01:34:50.955978   46163 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1026 01:34:50.955984   46163 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1026 01:34:50.955991   46163 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1026 01:34:50.956000   46163 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1026 01:34:50.956013   46163 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1026 01:34:50.956023   46163 command_runner.go:130] > #   The currently recognized values are:
	I1026 01:34:50.956033   46163 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1026 01:34:50.956047   46163 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1026 01:34:50.956059   46163 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1026 01:34:50.956072   46163 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1026 01:34:50.956086   46163 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1026 01:34:50.956095   46163 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1026 01:34:50.956101   46163 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1026 01:34:50.956109   46163 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1026 01:34:50.956117   46163 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1026 01:34:50.956125   46163 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1026 01:34:50.956129   46163 command_runner.go:130] > #   deprecated option "conmon".
	I1026 01:34:50.956137   46163 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1026 01:34:50.956149   46163 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1026 01:34:50.956161   46163 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1026 01:34:50.956172   46163 command_runner.go:130] > #   should be moved to the container's cgroup
	I1026 01:34:50.956186   46163 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1026 01:34:50.956196   46163 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1026 01:34:50.956209   46163 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1026 01:34:50.956219   46163 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1026 01:34:50.956225   46163 command_runner.go:130] > #
	I1026 01:34:50.956233   46163 command_runner.go:130] > # Using the seccomp notifier feature:
	I1026 01:34:50.956242   46163 command_runner.go:130] > #
	I1026 01:34:50.956252   46163 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1026 01:34:50.956265   46163 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1026 01:34:50.956276   46163 command_runner.go:130] > #
	I1026 01:34:50.956288   46163 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1026 01:34:50.956299   46163 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1026 01:34:50.956307   46163 command_runner.go:130] > #
	I1026 01:34:50.956320   46163 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1026 01:34:50.956330   46163 command_runner.go:130] > # feature.
	I1026 01:34:50.956335   46163 command_runner.go:130] > #
	I1026 01:34:50.956345   46163 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1026 01:34:50.956357   46163 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1026 01:34:50.956370   46163 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1026 01:34:50.956382   46163 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1026 01:34:50.956391   46163 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1026 01:34:50.956395   46163 command_runner.go:130] > #
	I1026 01:34:50.956408   46163 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1026 01:34:50.956421   46163 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1026 01:34:50.956427   46163 command_runner.go:130] > #
	I1026 01:34:50.956439   46163 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1026 01:34:50.956453   46163 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1026 01:34:50.956461   46163 command_runner.go:130] > #
	I1026 01:34:50.956471   46163 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1026 01:34:50.956485   46163 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1026 01:34:50.956495   46163 command_runner.go:130] > # limitation.
	I1026 01:34:50.956502   46163 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1026 01:34:50.956513   46163 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1026 01:34:50.956520   46163 command_runner.go:130] > runtime_type = "oci"
	I1026 01:34:50.956530   46163 command_runner.go:130] > runtime_root = "/run/runc"
	I1026 01:34:50.956537   46163 command_runner.go:130] > runtime_config_path = ""
	I1026 01:34:50.956548   46163 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1026 01:34:50.956555   46163 command_runner.go:130] > monitor_cgroup = "pod"
	I1026 01:34:50.956565   46163 command_runner.go:130] > monitor_exec_cgroup = ""
	I1026 01:34:50.956571   46163 command_runner.go:130] > monitor_env = [
	I1026 01:34:50.956582   46163 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1026 01:34:50.956585   46163 command_runner.go:130] > ]
	I1026 01:34:50.956591   46163 command_runner.go:130] > privileged_without_host_devices = false
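The seccomp notifier described in the comments above only applies to runtime handlers that allow the corresponding annotation. A hedged sketch of such a handler follows; the handler name "runc-debug" is hypothetical and not part of this profile's configuration:

	[crio.runtime.runtimes.runc-debug]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

A pod would then opt in by carrying the annotation io.kubernetes.cri-o.seccompNotifierAction=stop and, as noted above, would need restartPolicy set to Never so the kubelet does not immediately restart the terminated container.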
	I1026 01:34:50.956604   46163 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1026 01:34:50.956616   46163 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1026 01:34:50.956626   46163 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1026 01:34:50.956641   46163 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1026 01:34:50.956656   46163 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1026 01:34:50.956667   46163 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1026 01:34:50.956686   46163 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1026 01:34:50.956698   46163 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1026 01:34:50.956708   46163 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1026 01:34:50.956723   46163 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1026 01:34:50.956733   46163 command_runner.go:130] > # Example:
	I1026 01:34:50.956742   46163 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1026 01:34:50.956752   46163 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1026 01:34:50.956763   46163 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1026 01:34:50.956774   46163 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1026 01:34:50.956782   46163 command_runner.go:130] > # cpuset = 0
	I1026 01:34:50.956792   46163 command_runner.go:130] > # cpushares = "0-1"
	I1026 01:34:50.956799   46163 command_runner.go:130] > # Where:
	I1026 01:34:50.956807   46163 command_runner.go:130] > # The workload name is workload-type.
	I1026 01:34:50.956822   46163 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1026 01:34:50.956834   46163 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1026 01:34:50.956846   46163 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1026 01:34:50.956861   46163 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1026 01:34:50.956873   46163 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
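Read together with the commented example, the pod side would look roughly like the annotations sketched below as comments; the container name and cpushares value are illustrative:

	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	# A pod opts in with the key-only annotation:
	#   io.crio/workload: ""
	# and may override a resource for one container, mirroring the example above:
	#   "io.crio.workload-type/my-ctr = {"cpushares": "512"}"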
	I1026 01:34:50.956884   46163 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1026 01:34:50.956898   46163 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1026 01:34:50.956910   46163 command_runner.go:130] > # Default value is set to true
	I1026 01:34:50.956917   46163 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1026 01:34:50.956926   46163 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1026 01:34:50.956934   46163 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1026 01:34:50.956941   46163 command_runner.go:130] > # Default value is set to 'false'
	I1026 01:34:50.956951   46163 command_runner.go:130] > # disable_hostport_mapping = false
	I1026 01:34:50.956961   46163 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1026 01:34:50.956966   46163 command_runner.go:130] > #
	I1026 01:34:50.956972   46163 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1026 01:34:50.956980   46163 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1026 01:34:50.956990   46163 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1026 01:34:50.957001   46163 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1026 01:34:50.957009   46163 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1026 01:34:50.957014   46163 command_runner.go:130] > [crio.image]
	I1026 01:34:50.957022   46163 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1026 01:34:50.957029   46163 command_runner.go:130] > # default_transport = "docker://"
	I1026 01:34:50.957041   46163 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1026 01:34:50.957051   46163 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1026 01:34:50.957058   46163 command_runner.go:130] > # global_auth_file = ""
	I1026 01:34:50.957066   46163 command_runner.go:130] > # The image used to instantiate infra containers.
	I1026 01:34:50.957073   46163 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:34:50.957080   46163 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1026 01:34:50.957090   46163 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1026 01:34:50.957107   46163 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1026 01:34:50.957116   46163 command_runner.go:130] > # This option supports live configuration reload.
	I1026 01:34:50.957123   46163 command_runner.go:130] > # pause_image_auth_file = ""
	I1026 01:34:50.957132   46163 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1026 01:34:50.957142   46163 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1026 01:34:50.957152   46163 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1026 01:34:50.957160   46163 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1026 01:34:50.957167   46163 command_runner.go:130] > # pause_command = "/pause"
	I1026 01:34:50.957178   46163 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1026 01:34:50.957187   46163 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1026 01:34:50.957201   46163 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1026 01:34:50.957210   46163 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1026 01:34:50.957219   46163 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1026 01:34:50.957228   46163 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1026 01:34:50.957235   46163 command_runner.go:130] > # pinned_images = [
	I1026 01:34:50.957240   46163 command_runner.go:130] > # ]
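A sketch of pinned_images covering the three match styles described above (the pause image matches the pause_image configured later in this file; the other entries are illustrative):

	pinned_images = [
		"registry.k8s.io/pause:3.10",       # exact match
		"registry.k8s.io/kube-apiserver*",  # glob: wildcard at the end
		"*busybox*",                        # keyword: wildcards on both ends
	]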
	I1026 01:34:50.957251   46163 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1026 01:34:50.957265   46163 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1026 01:34:50.957279   46163 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1026 01:34:50.957291   46163 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1026 01:34:50.957303   46163 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1026 01:34:50.957312   46163 command_runner.go:130] > # signature_policy = ""
	I1026 01:34:50.957321   46163 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1026 01:34:50.957333   46163 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1026 01:34:50.957339   46163 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1026 01:34:50.957348   46163 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1026 01:34:50.957354   46163 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1026 01:34:50.957360   46163 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1026 01:34:50.957369   46163 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1026 01:34:50.957377   46163 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1026 01:34:50.957381   46163 command_runner.go:130] > # changing them here.
	I1026 01:34:50.957387   46163 command_runner.go:130] > # insecure_registries = [
	I1026 01:34:50.957390   46163 command_runner.go:130] > # ]
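As a sketch only, skipping TLS verification for a single private registry would look like the following; the registry address is hypothetical, and the comments above recommend doing this in /etc/containers/registries.conf instead:

	insecure_registries = [
		"registry.internal.example:5000",
	]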
	I1026 01:34:50.957403   46163 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1026 01:34:50.957412   46163 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1026 01:34:50.957429   46163 command_runner.go:130] > # image_volumes = "mkdir"
	I1026 01:34:50.957437   46163 command_runner.go:130] > # Temporary directory to use for storing big files
	I1026 01:34:50.957448   46163 command_runner.go:130] > # big_files_temporary_dir = ""
	I1026 01:34:50.957458   46163 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1026 01:34:50.957467   46163 command_runner.go:130] > # CNI plugins.
	I1026 01:34:50.957474   46163 command_runner.go:130] > [crio.network]
	I1026 01:34:50.957485   46163 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1026 01:34:50.957497   46163 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1026 01:34:50.957505   46163 command_runner.go:130] > # cni_default_network = ""
	I1026 01:34:50.957513   46163 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1026 01:34:50.957518   46163 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1026 01:34:50.957525   46163 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1026 01:34:50.957532   46163 command_runner.go:130] > # plugin_dirs = [
	I1026 01:34:50.957536   46163 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1026 01:34:50.957541   46163 command_runner.go:130] > # ]
	I1026 01:34:50.957547   46163 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1026 01:34:50.957553   46163 command_runner.go:130] > [crio.metrics]
	I1026 01:34:50.957557   46163 command_runner.go:130] > # Globally enable or disable metrics support.
	I1026 01:34:50.957564   46163 command_runner.go:130] > enable_metrics = true
	I1026 01:34:50.957568   46163 command_runner.go:130] > # Specify enabled metrics collectors.
	I1026 01:34:50.957575   46163 command_runner.go:130] > # By default, all metrics are enabled.
	I1026 01:34:50.957581   46163 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1026 01:34:50.957589   46163 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1026 01:34:50.957595   46163 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1026 01:34:50.957601   46163 command_runner.go:130] > # metrics_collectors = [
	I1026 01:34:50.957605   46163 command_runner.go:130] > # 	"operations",
	I1026 01:34:50.957612   46163 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1026 01:34:50.957617   46163 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1026 01:34:50.957621   46163 command_runner.go:130] > # 	"operations_errors",
	I1026 01:34:50.957627   46163 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1026 01:34:50.957631   46163 command_runner.go:130] > # 	"image_pulls_by_name",
	I1026 01:34:50.957646   46163 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1026 01:34:50.957653   46163 command_runner.go:130] > # 	"image_pulls_failures",
	I1026 01:34:50.957657   46163 command_runner.go:130] > # 	"image_pulls_successes",
	I1026 01:34:50.957663   46163 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1026 01:34:50.957667   46163 command_runner.go:130] > # 	"image_layer_reuse",
	I1026 01:34:50.957674   46163 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1026 01:34:50.957680   46163 command_runner.go:130] > # 	"containers_oom_total",
	I1026 01:34:50.957687   46163 command_runner.go:130] > # 	"containers_oom",
	I1026 01:34:50.957691   46163 command_runner.go:130] > # 	"processes_defunct",
	I1026 01:34:50.957697   46163 command_runner.go:130] > # 	"operations_total",
	I1026 01:34:50.957701   46163 command_runner.go:130] > # 	"operations_latency_seconds",
	I1026 01:34:50.957708   46163 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1026 01:34:50.957712   46163 command_runner.go:130] > # 	"operations_errors_total",
	I1026 01:34:50.957716   46163 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1026 01:34:50.957723   46163 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1026 01:34:50.957727   46163 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1026 01:34:50.957733   46163 command_runner.go:130] > # 	"image_pulls_success_total",
	I1026 01:34:50.957738   46163 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1026 01:34:50.957744   46163 command_runner.go:130] > # 	"containers_oom_count_total",
	I1026 01:34:50.957748   46163 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1026 01:34:50.957754   46163 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1026 01:34:50.957758   46163 command_runner.go:130] > # ]
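To illustrate the prefixing rule mentioned above, the three entries in this sketch all refer to the same collector, so listing any one of them is enough:

	metrics_collectors = [
		"operations",
		"crio_operations",
		"container_runtime_crio_operations",
	]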
	I1026 01:34:50.957765   46163 command_runner.go:130] > # The port on which the metrics server will listen.
	I1026 01:34:50.957769   46163 command_runner.go:130] > # metrics_port = 9090
	I1026 01:34:50.957776   46163 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1026 01:34:50.957780   46163 command_runner.go:130] > # metrics_socket = ""
	I1026 01:34:50.957787   46163 command_runner.go:130] > # The certificate for the secure metrics server.
	I1026 01:34:50.957793   46163 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1026 01:34:50.957801   46163 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1026 01:34:50.957805   46163 command_runner.go:130] > # certificate on any modification event.
	I1026 01:34:50.957812   46163 command_runner.go:130] > # metrics_cert = ""
	I1026 01:34:50.957821   46163 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1026 01:34:50.957832   46163 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1026 01:34:50.957845   46163 command_runner.go:130] > # metrics_key = ""
	I1026 01:34:50.957853   46163 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1026 01:34:50.957860   46163 command_runner.go:130] > [crio.tracing]
	I1026 01:34:50.957865   46163 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1026 01:34:50.957871   46163 command_runner.go:130] > # enable_tracing = false
	I1026 01:34:50.957876   46163 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1026 01:34:50.957884   46163 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1026 01:34:50.957894   46163 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1026 01:34:50.957901   46163 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
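A sketch of this section with tracing switched on and every span sampled, using the endpoint and sampling value the comments above describe:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	tracing_sampling_rate_per_million = 1000000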
	I1026 01:34:50.957905   46163 command_runner.go:130] > # CRI-O NRI configuration.
	I1026 01:34:50.957911   46163 command_runner.go:130] > [crio.nri]
	I1026 01:34:50.957917   46163 command_runner.go:130] > # Globally enable or disable NRI.
	I1026 01:34:50.957923   46163 command_runner.go:130] > # enable_nri = false
	I1026 01:34:50.957927   46163 command_runner.go:130] > # NRI socket to listen on.
	I1026 01:34:50.957932   46163 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1026 01:34:50.957938   46163 command_runner.go:130] > # NRI plugin directory to use.
	I1026 01:34:50.957943   46163 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1026 01:34:50.957952   46163 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1026 01:34:50.957959   46163 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1026 01:34:50.957964   46163 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1026 01:34:50.957970   46163 command_runner.go:130] > # nri_disable_connections = false
	I1026 01:34:50.957976   46163 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1026 01:34:50.957982   46163 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1026 01:34:50.957987   46163 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1026 01:34:50.957994   46163 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1026 01:34:50.958000   46163 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1026 01:34:50.958006   46163 command_runner.go:130] > [crio.stats]
	I1026 01:34:50.958012   46163 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1026 01:34:50.958018   46163 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1026 01:34:50.958022   46163 command_runner.go:130] > # stats_collection_period = 0
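For comparison, a periodic-collection sketch; the 10-second period is illustrative, while 0 (the default shown above) means stats are collected on demand:

	[crio.stats]
	stats_collection_period = 10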
	I1026 01:34:50.958668   46163 command_runner.go:130] ! time="2024-10-26 01:34:50.914024361Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1026 01:34:50.958695   46163 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1026 01:34:50.958769   46163 cni.go:84] Creating CNI manager for ""
	I1026 01:34:50.958783   46163 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1026 01:34:50.958795   46163 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 01:34:50.958820   46163 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.35 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-328488 NodeName:multinode-328488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.35"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.35 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 01:34:50.958987   46163 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.35
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-328488"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.35"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.35"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 01:34:50.959061   46163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:34:50.968732   46163 command_runner.go:130] > kubeadm
	I1026 01:34:50.968754   46163 command_runner.go:130] > kubectl
	I1026 01:34:50.968758   46163 command_runner.go:130] > kubelet
	I1026 01:34:50.968796   46163 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 01:34:50.968842   46163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 01:34:50.977796   46163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1026 01:34:50.994030   46163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:34:51.009814   46163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1026 01:34:51.025838   46163 ssh_runner.go:195] Run: grep 192.168.39.35	control-plane.minikube.internal$ /etc/hosts
	I1026 01:34:51.029484   46163 command_runner.go:130] > 192.168.39.35	control-plane.minikube.internal
	I1026 01:34:51.029670   46163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:34:51.162883   46163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:34:51.177166   46163 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488 for IP: 192.168.39.35
	I1026 01:34:51.177186   46163 certs.go:194] generating shared ca certs ...
	I1026 01:34:51.177201   46163 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:34:51.177351   46163 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:34:51.177391   46163 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:34:51.177401   46163 certs.go:256] generating profile certs ...
	I1026 01:34:51.177510   46163 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/client.key
	I1026 01:34:51.177568   46163 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/apiserver.key.6d521543
	I1026 01:34:51.177605   46163 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/proxy-client.key
	I1026 01:34:51.177618   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1026 01:34:51.177634   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1026 01:34:51.177648   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1026 01:34:51.177661   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1026 01:34:51.177673   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1026 01:34:51.177685   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1026 01:34:51.177697   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1026 01:34:51.177709   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1026 01:34:51.177762   46163 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:34:51.177795   46163 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:34:51.177809   46163 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:34:51.177835   46163 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:34:51.177857   46163 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:34:51.177889   46163 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:34:51.177926   46163 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:34:51.177952   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> /usr/share/ca-certificates/176152.pem
	I1026 01:34:51.177965   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:34:51.177981   46163 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem -> /usr/share/ca-certificates/17615.pem
	I1026 01:34:51.178545   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:34:51.203015   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:34:51.226538   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:34:51.250438   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:34:51.274389   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 01:34:51.299439   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 01:34:51.346100   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:34:51.370073   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/multinode-328488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:34:51.394061   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:34:51.417913   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:34:51.441314   46163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:34:51.464922   46163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 01:34:51.480778   46163 ssh_runner.go:195] Run: openssl version
	I1026 01:34:51.486390   46163 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1026 01:34:51.486469   46163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:34:51.497463   46163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:34:51.501568   46163 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:34:51.501885   46163 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:34:51.501935   46163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:34:51.507271   46163 command_runner.go:130] > b5213941
	I1026 01:34:51.507341   46163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:34:51.516664   46163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:34:51.527486   46163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:34:51.531729   46163 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:34:51.532095   46163 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:34:51.532152   46163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:34:51.537534   46163 command_runner.go:130] > 51391683
	I1026 01:34:51.537676   46163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 01:34:51.546317   46163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:34:51.556109   46163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:34:51.560060   46163 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:34:51.560219   46163 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:34:51.560272   46163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:34:51.565302   46163 command_runner.go:130] > 3ec20f2e
	I1026 01:34:51.565482   46163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:34:51.573828   46163 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:34:51.577649   46163 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:34:51.577663   46163 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1026 01:34:51.577668   46163 command_runner.go:130] > Device: 253,1	Inode: 6291502     Links: 1
	I1026 01:34:51.577674   46163 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1026 01:34:51.577680   46163 command_runner.go:130] > Access: 2024-10-26 01:28:12.975968770 +0000
	I1026 01:34:51.577685   46163 command_runner.go:130] > Modify: 2024-10-26 01:28:12.975968770 +0000
	I1026 01:34:51.577696   46163 command_runner.go:130] > Change: 2024-10-26 01:28:12.975968770 +0000
	I1026 01:34:51.577700   46163 command_runner.go:130] >  Birth: 2024-10-26 01:28:12.975968770 +0000
	I1026 01:34:51.577800   46163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 01:34:51.582975   46163 command_runner.go:130] > Certificate will not expire
	I1026 01:34:51.583034   46163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 01:34:51.588009   46163 command_runner.go:130] > Certificate will not expire
	I1026 01:34:51.588055   46163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 01:34:51.593006   46163 command_runner.go:130] > Certificate will not expire
	I1026 01:34:51.593188   46163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 01:34:51.598162   46163 command_runner.go:130] > Certificate will not expire
	I1026 01:34:51.598355   46163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 01:34:51.603317   46163 command_runner.go:130] > Certificate will not expire
	I1026 01:34:51.603465   46163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 01:34:51.608609   46163 command_runner.go:130] > Certificate will not expire
	I1026 01:34:51.608854   46163 kubeadm.go:392] StartCluster: {Name:multinode-328488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-328488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:34:51.608964   46163 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 01:34:51.609011   46163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 01:34:51.646979   46163 command_runner.go:130] > bc4e75868442bf940a705867036e2ae4099bb4db1651bf216fc502215b9c239d
	I1026 01:34:51.647006   46163 command_runner.go:130] > 95e559f893c174aa8b66984700b8cdaaeda1b662d69a8ff15021f775acd0671d
	I1026 01:34:51.647013   46163 command_runner.go:130] > e2187ce2d5e839cc3f2fa0ef2721c1dbfd167077c759594e9360e922c1d1100b
	I1026 01:34:51.647021   46163 command_runner.go:130] > de93b49883e4242d219cb67055a43628d428e32ab41baaf696c90993c288beea
	I1026 01:34:51.647093   46163 command_runner.go:130] > 3711c0271da051688fa1322358cd58eab86e9565d5a5961679a354d1d7de91bb
	I1026 01:34:51.647176   46163 command_runner.go:130] > 85f818a23be263dec89ee672e9a595a013940a7113d2587d88e63822d37824b9
	I1026 01:34:51.647244   46163 command_runner.go:130] > ea1ec21d25070478483636ee683170416b5266b38d0dcf7ba88c253fa585e905
	I1026 01:34:51.647342   46163 command_runner.go:130] > 810643c0c723504a6ccb55d66d2d93c6cb55373974a5ce23ee716c5689169b6d
	I1026 01:34:51.649116   46163 cri.go:89] found id: "bc4e75868442bf940a705867036e2ae4099bb4db1651bf216fc502215b9c239d"
	I1026 01:34:51.649131   46163 cri.go:89] found id: "95e559f893c174aa8b66984700b8cdaaeda1b662d69a8ff15021f775acd0671d"
	I1026 01:34:51.649135   46163 cri.go:89] found id: "e2187ce2d5e839cc3f2fa0ef2721c1dbfd167077c759594e9360e922c1d1100b"
	I1026 01:34:51.649139   46163 cri.go:89] found id: "de93b49883e4242d219cb67055a43628d428e32ab41baaf696c90993c288beea"
	I1026 01:34:51.649142   46163 cri.go:89] found id: "3711c0271da051688fa1322358cd58eab86e9565d5a5961679a354d1d7de91bb"
	I1026 01:34:51.649145   46163 cri.go:89] found id: "85f818a23be263dec89ee672e9a595a013940a7113d2587d88e63822d37824b9"
	I1026 01:34:51.649147   46163 cri.go:89] found id: "ea1ec21d25070478483636ee683170416b5266b38d0dcf7ba88c253fa585e905"
	I1026 01:34:51.649150   46163 cri.go:89] found id: "810643c0c723504a6ccb55d66d2d93c6cb55373974a5ce23ee716c5689169b6d"
	I1026 01:34:51.649152   46163 cri.go:89] found id: ""
	I1026 01:34:51.649193   46163 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-328488 -n multinode-328488
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-328488 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.34s)

                                                
                                    
x
+
TestPreload (269.66s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-930428 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1026 01:43:52.960947   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-930428 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m6.522367919s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-930428 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-930428 image pull gcr.io/k8s-minikube/busybox: (3.158067132s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-930428
E1026 01:46:20.355271   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:46:37.285669   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-930428: exit status 82 (2m0.465857978s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-930428"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-930428 failed: exit status 82
panic.go:629: *** TestPreload FAILED at 2024-10-26 01:47:24.06567666 +0000 UTC m=+3858.833439853
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-930428 -n test-preload-930428
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-930428 -n test-preload-930428: exit status 3 (18.576108685s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1026 01:47:42.637843   51041 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	E1026 01:47:42.637868   51041 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-930428" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-930428" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-930428
--- FAIL: TestPreload (269.66s)

                                                
                                    
x
+
TestKubernetesUpgrade (1175.68s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-970804 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-970804 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m7.839453055s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-970804] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-970804" primary control-plane node in "kubernetes-upgrade-970804" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I1026 01:51:47.505970   54330 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:51:47.506079   54330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:51:47.506088   54330 out.go:358] Setting ErrFile to fd 2...
	I1026 01:51:47.506093   54330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:51:47.506328   54330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 01:51:47.506922   54330 out.go:352] Setting JSON to false
	I1026 01:51:47.507902   54330 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5647,"bootTime":1729901860,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 01:51:47.508004   54330 start.go:139] virtualization: kvm guest
	I1026 01:51:47.510167   54330 out.go:177] * [kubernetes-upgrade-970804] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 01:51:47.511560   54330 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 01:51:47.511562   54330 notify.go:220] Checking for updates...
	I1026 01:51:47.512825   54330 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:51:47.514219   54330 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:51:47.515599   54330 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:51:47.516936   54330 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 01:51:47.518139   54330 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:51:47.519678   54330 config.go:182] Loaded profile config "NoKubernetes-694381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1026 01:51:47.519763   54330 config.go:182] Loaded profile config "cert-expiration-999717": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:51:47.519844   54330 config.go:182] Loaded profile config "running-upgrade-061004": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1026 01:51:47.519930   54330 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 01:51:47.555555   54330 out.go:177] * Using the kvm2 driver based on user configuration
	I1026 01:51:47.556775   54330 start.go:297] selected driver: kvm2
	I1026 01:51:47.556785   54330 start.go:901] validating driver "kvm2" against <nil>
	I1026 01:51:47.556796   54330 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:51:47.557501   54330 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:51:47.557573   54330 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 01:51:47.572868   54330 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 01:51:47.572910   54330 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 01:51:47.573177   54330 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 01:51:47.573204   54330 cni.go:84] Creating CNI manager for ""
	I1026 01:51:47.573257   54330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 01:51:47.573271   54330 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 01:51:47.573334   54330 start.go:340] cluster config:
	{Name:kubernetes-upgrade-970804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-970804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:51:47.573508   54330 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:51:47.575158   54330 out.go:177] * Starting "kubernetes-upgrade-970804" primary control-plane node in "kubernetes-upgrade-970804" cluster
	I1026 01:51:47.576365   54330 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1026 01:51:47.576395   54330 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1026 01:51:47.576405   54330 cache.go:56] Caching tarball of preloaded images
	I1026 01:51:47.576493   54330 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:51:47.576507   54330 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1026 01:51:47.576620   54330 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/config.json ...
	I1026 01:51:47.576641   54330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/config.json: {Name:mk941aa38ac5c00d13ce09c2ba394f2b943083cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:51:47.576798   54330 start.go:360] acquireMachinesLock for kubernetes-upgrade-970804: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 01:52:22.496108   54330 start.go:364] duration metric: took 34.91928248s to acquireMachinesLock for "kubernetes-upgrade-970804"
	I1026 01:52:22.496186   54330 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-970804 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-970804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:52:22.496310   54330 start.go:125] createHost starting for "" (driver="kvm2")
	I1026 01:52:22.497566   54330 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1026 01:52:22.497802   54330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:52:22.497869   54330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:52:22.518345   54330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I1026 01:52:22.518796   54330 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:52:22.519518   54330 main.go:141] libmachine: Using API Version  1
	I1026 01:52:22.519564   54330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:52:22.519929   54330 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:52:22.520129   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetMachineName
	I1026 01:52:22.520295   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:52:22.520451   54330 start.go:159] libmachine.API.Create for "kubernetes-upgrade-970804" (driver="kvm2")
	I1026 01:52:22.520486   54330 client.go:168] LocalClient.Create starting
	I1026 01:52:22.520522   54330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 01:52:22.520562   54330 main.go:141] libmachine: Decoding PEM data...
	I1026 01:52:22.520588   54330 main.go:141] libmachine: Parsing certificate...
	I1026 01:52:22.520659   54330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 01:52:22.520690   54330 main.go:141] libmachine: Decoding PEM data...
	I1026 01:52:22.520706   54330 main.go:141] libmachine: Parsing certificate...
	I1026 01:52:22.520740   54330 main.go:141] libmachine: Running pre-create checks...
	I1026 01:52:22.520751   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .PreCreateCheck
	I1026 01:52:22.521200   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetConfigRaw
	I1026 01:52:22.521669   54330 main.go:141] libmachine: Creating machine...
	I1026 01:52:22.521685   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .Create
	I1026 01:52:22.521816   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Creating KVM machine...
	I1026 01:52:22.523284   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found existing default KVM network
	I1026 01:52:22.524750   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:22.524569   54660 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:24:c5:c7} reservation:<nil>}
	I1026 01:52:22.526026   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:22.525948   54660 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fa:b7:10} reservation:<nil>}
	I1026 01:52:22.526983   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:22.526890   54660 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:5d:ce:c5} reservation:<nil>}
	I1026 01:52:22.528062   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:22.527977   54660 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000325a00}
	I1026 01:52:22.528087   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | created network xml: 
	I1026 01:52:22.528105   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | <network>
	I1026 01:52:22.528120   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG |   <name>mk-kubernetes-upgrade-970804</name>
	I1026 01:52:22.528132   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG |   <dns enable='no'/>
	I1026 01:52:22.528141   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG |   
	I1026 01:52:22.528147   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1026 01:52:22.528156   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG |     <dhcp>
	I1026 01:52:22.528168   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1026 01:52:22.528176   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG |     </dhcp>
	I1026 01:52:22.528180   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG |   </ip>
	I1026 01:52:22.528189   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG |   
	I1026 01:52:22.528196   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | </network>
	I1026 01:52:22.528202   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | 
	I1026 01:52:22.533909   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | trying to create private KVM network mk-kubernetes-upgrade-970804 192.168.72.0/24...
	I1026 01:52:22.601002   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | private KVM network mk-kubernetes-upgrade-970804 192.168.72.0/24 created
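
The XML logged above is the libvirt network definition the kvm2 driver generated for this profile: bridge mk-kubernetes-upgrade-970804 on 192.168.72.0/24 with DHCP and dns disabled. A hedged way to confirm what libvirt actually stored is to shell out to virsh from Go; the network name comes from the log, and the snippet itself is only an illustrative check, not part of minikube:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Dump the definition libvirt stored for the network the driver created
    	// and compare it with the XML printed in the log above. This needs
    	// libvirt access (libvirt group membership or sudo).
    	out, err := exec.Command("virsh", "--connect", "qemu:///system",
    		"net-dumpxml", "mk-kubernetes-upgrade-970804").CombinedOutput()
    	if err != nil {
    		log.Fatalf("virsh failed: %v\n%s", err, out)
    	}
    	fmt.Println(string(out))
    }
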
	I1026 01:52:22.601043   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:22.600957   54660 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:52:22.601076   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804 ...
	I1026 01:52:22.601089   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 01:52:22.601112   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 01:52:22.861197   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:22.861081   54660 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804/id_rsa...
	I1026 01:52:22.997252   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:22.997090   54660 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804/kubernetes-upgrade-970804.rawdisk...
	I1026 01:52:22.997287   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Writing magic tar header
	I1026 01:52:22.997314   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Writing SSH key tar header
	I1026 01:52:22.997328   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:22.997264   54660 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804 ...
	I1026 01:52:22.997445   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804
	I1026 01:52:22.997468   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 01:52:22.997477   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804 (perms=drwx------)
	I1026 01:52:22.997495   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 01:52:22.997504   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 01:52:22.997516   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 01:52:22.997529   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 01:52:22.997545   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 01:52:22.997559   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Creating domain...
	I1026 01:52:22.997573   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:52:22.997587   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 01:52:22.997596   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 01:52:22.997606   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Checking permissions on dir: /home/jenkins
	I1026 01:52:22.997620   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Checking permissions on dir: /home
	I1026 01:52:22.997631   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Skipping /home - not owner
	I1026 01:52:22.998766   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) define libvirt domain using xml: 
	I1026 01:52:22.998789   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) <domain type='kvm'>
	I1026 01:52:22.998799   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)   <name>kubernetes-upgrade-970804</name>
	I1026 01:52:22.998812   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)   <memory unit='MiB'>2200</memory>
	I1026 01:52:22.998822   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)   <vcpu>2</vcpu>
	I1026 01:52:22.998829   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)   <features>
	I1026 01:52:22.998860   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     <acpi/>
	I1026 01:52:22.998870   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     <apic/>
	I1026 01:52:22.998879   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     <pae/>
	I1026 01:52:22.998892   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     
	I1026 01:52:22.998911   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)   </features>
	I1026 01:52:22.998922   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)   <cpu mode='host-passthrough'>
	I1026 01:52:22.998930   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)   
	I1026 01:52:22.998940   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)   </cpu>
	I1026 01:52:22.998947   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)   <os>
	I1026 01:52:22.998954   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     <type>hvm</type>
	I1026 01:52:22.999013   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     <boot dev='cdrom'/>
	I1026 01:52:22.999034   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     <boot dev='hd'/>
	I1026 01:52:22.999069   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     <bootmenu enable='no'/>
	I1026 01:52:22.999096   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)   </os>
	I1026 01:52:22.999112   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)   <devices>
	I1026 01:52:22.999132   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     <disk type='file' device='cdrom'>
	I1026 01:52:22.999150   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804/boot2docker.iso'/>
	I1026 01:52:22.999161   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)       <target dev='hdc' bus='scsi'/>
	I1026 01:52:22.999169   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)       <readonly/>
	I1026 01:52:22.999178   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     </disk>
	I1026 01:52:22.999275   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     <disk type='file' device='disk'>
	I1026 01:52:22.999297   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 01:52:22.999307   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804/kubernetes-upgrade-970804.rawdisk'/>
	I1026 01:52:22.999315   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)       <target dev='hda' bus='virtio'/>
	I1026 01:52:22.999320   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     </disk>
	I1026 01:52:22.999326   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     <interface type='network'>
	I1026 01:52:22.999334   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)       <source network='mk-kubernetes-upgrade-970804'/>
	I1026 01:52:22.999349   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)       <model type='virtio'/>
	I1026 01:52:22.999381   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     </interface>
	I1026 01:52:22.999396   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     <interface type='network'>
	I1026 01:52:22.999408   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)       <source network='default'/>
	I1026 01:52:22.999420   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)       <model type='virtio'/>
	I1026 01:52:22.999431   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     </interface>
	I1026 01:52:22.999441   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     <serial type='pty'>
	I1026 01:52:22.999456   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)       <target port='0'/>
	I1026 01:52:22.999468   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     </serial>
	I1026 01:52:22.999479   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     <console type='pty'>
	I1026 01:52:22.999493   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)       <target type='serial' port='0'/>
	I1026 01:52:22.999503   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     </console>
	I1026 01:52:22.999516   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     <rng model='virtio'>
	I1026 01:52:22.999531   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)       <backend model='random'>/dev/random</backend>
	I1026 01:52:22.999544   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     </rng>
	I1026 01:52:22.999554   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     
	I1026 01:52:22.999572   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)     
	I1026 01:52:22.999582   54330 main.go:141] libmachine: (kubernetes-upgrade-970804)   </devices>
	I1026 01:52:22.999592   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) </domain>
	I1026 01:52:22.999607   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) 
	I1026 01:52:23.006938   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:d8:81:bf in network default
	I1026 01:52:23.007682   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Ensuring networks are active...
	I1026 01:52:23.007712   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:23.008524   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Ensuring network default is active
	I1026 01:52:23.008839   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Ensuring network mk-kubernetes-upgrade-970804 is active
	I1026 01:52:23.009368   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Getting domain xml...
	I1026 01:52:23.010211   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Creating domain...
	I1026 01:52:24.355307   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Waiting to get IP...
	I1026 01:52:24.356237   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:24.356745   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find current IP address of domain kubernetes-upgrade-970804 in network mk-kubernetes-upgrade-970804
	I1026 01:52:24.356774   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:24.356725   54660 retry.go:31] will retry after 189.777548ms: waiting for machine to come up
	I1026 01:52:24.548298   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:24.548904   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find current IP address of domain kubernetes-upgrade-970804 in network mk-kubernetes-upgrade-970804
	I1026 01:52:24.548927   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:24.548872   54660 retry.go:31] will retry after 251.996144ms: waiting for machine to come up
	I1026 01:52:24.802364   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:24.802817   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find current IP address of domain kubernetes-upgrade-970804 in network mk-kubernetes-upgrade-970804
	I1026 01:52:24.802846   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:24.802769   54660 retry.go:31] will retry after 305.523394ms: waiting for machine to come up
	I1026 01:52:25.109489   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:25.109977   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find current IP address of domain kubernetes-upgrade-970804 in network mk-kubernetes-upgrade-970804
	I1026 01:52:25.110002   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:25.109934   54660 retry.go:31] will retry after 453.108049ms: waiting for machine to come up
	I1026 01:52:25.565648   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:25.566143   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find current IP address of domain kubernetes-upgrade-970804 in network mk-kubernetes-upgrade-970804
	I1026 01:52:25.566172   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:25.566094   54660 retry.go:31] will retry after 642.804064ms: waiting for machine to come up
	I1026 01:52:26.210939   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:26.211457   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find current IP address of domain kubernetes-upgrade-970804 in network mk-kubernetes-upgrade-970804
	I1026 01:52:26.211502   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:26.211425   54660 retry.go:31] will retry after 603.891199ms: waiting for machine to come up
	I1026 01:52:26.817227   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:26.817775   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find current IP address of domain kubernetes-upgrade-970804 in network mk-kubernetes-upgrade-970804
	I1026 01:52:26.817802   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:26.817725   54660 retry.go:31] will retry after 896.293866ms: waiting for machine to come up
	I1026 01:52:27.715556   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:27.716129   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find current IP address of domain kubernetes-upgrade-970804 in network mk-kubernetes-upgrade-970804
	I1026 01:52:27.716159   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:27.716059   54660 retry.go:31] will retry after 1.278380131s: waiting for machine to come up
	I1026 01:52:28.996667   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:28.997223   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find current IP address of domain kubernetes-upgrade-970804 in network mk-kubernetes-upgrade-970804
	I1026 01:52:28.997253   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:28.997163   54660 retry.go:31] will retry after 1.197645117s: waiting for machine to come up
	I1026 01:52:30.196343   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:30.196711   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find current IP address of domain kubernetes-upgrade-970804 in network mk-kubernetes-upgrade-970804
	I1026 01:52:30.196741   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:30.196659   54660 retry.go:31] will retry after 2.116442274s: waiting for machine to come up
	I1026 01:52:32.314591   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:32.315016   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find current IP address of domain kubernetes-upgrade-970804 in network mk-kubernetes-upgrade-970804
	I1026 01:52:32.315042   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:32.314978   54660 retry.go:31] will retry after 2.000252836s: waiting for machine to come up
	I1026 01:52:34.317136   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:34.317696   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find current IP address of domain kubernetes-upgrade-970804 in network mk-kubernetes-upgrade-970804
	I1026 01:52:34.317721   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:34.317633   54660 retry.go:31] will retry after 2.338607834s: waiting for machine to come up
	I1026 01:52:36.659022   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:36.659460   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find current IP address of domain kubernetes-upgrade-970804 in network mk-kubernetes-upgrade-970804
	I1026 01:52:36.659489   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:36.659407   54660 retry.go:31] will retry after 3.480146155s: waiting for machine to come up
	I1026 01:52:40.141146   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:40.141677   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find current IP address of domain kubernetes-upgrade-970804 in network mk-kubernetes-upgrade-970804
	I1026 01:52:40.141698   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | I1026 01:52:40.141636   54660 retry.go:31] will retry after 5.422138614s: waiting for machine to come up
	I1026 01:52:45.566355   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:45.566785   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has current primary IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:45.566809   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Found IP for machine: 192.168.72.48
	I1026 01:52:45.566857   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Reserving static IP address...
	I1026 01:52:45.567172   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-970804", mac: "52:54:00:33:51:fe", ip: "192.168.72.48"} in network mk-kubernetes-upgrade-970804
	I1026 01:52:45.642160   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Reserved static IP address: 192.168.72.48
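
The "will retry after ..." lines above are the driver polling for a DHCP lease with growing, jittered delays until the domain reports 192.168.72.48. A minimal sketch of that retry-with-increasing-delay shape is below; the delays and the lookupIP stand-in are placeholders, and the real logic lives in minikube's retry package:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP is a stand-in for querying libvirt for the domain's lease.
    func lookupIP() (string, error) { return "", errNoLease }

    func main() {
    	delay := 200 * time.Millisecond
    	for attempt := 1; attempt <= 15; attempt++ {
    		ip, err := lookupIP()
    		if err == nil {
    			fmt.Println("found IP:", ip)
    			return
    		}
    		fmt.Printf("attempt %d: %v, retrying after %v\n", attempt, err, delay)
    		time.Sleep(delay)
    		delay *= 2 // the real loop grows in smaller, jittered steps
    		if delay > 5*time.Second {
    			delay = 5 * time.Second
    		}
    	}
    	fmt.Println("gave up waiting for an IP")
    }
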
	I1026 01:52:45.642191   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Getting to WaitForSSH function...
	I1026 01:52:45.642200   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Waiting for SSH to be available...
	I1026 01:52:45.644949   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:45.645273   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804
	I1026 01:52:45.645302   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-970804 interface with MAC address 52:54:00:33:51:fe
	I1026 01:52:45.645440   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Using SSH client type: external
	I1026 01:52:45.645465   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804/id_rsa (-rw-------)
	I1026 01:52:45.645541   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 01:52:45.645575   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | About to run SSH command:
	I1026 01:52:45.645599   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | exit 0
	I1026 01:52:45.649913   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | SSH cmd err, output: exit status 255: 
	I1026 01:52:45.649937   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1026 01:52:45.649947   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | command : exit 0
	I1026 01:52:45.649954   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | err     : exit status 255
	I1026 01:52:45.649965   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | output  : 
	I1026 01:52:48.650120   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Getting to WaitForSSH function...
	I1026 01:52:48.652927   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:48.653326   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:48.653364   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:48.653467   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Using SSH client type: external
	I1026 01:52:48.653502   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804/id_rsa (-rw-------)
	I1026 01:52:48.653548   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 01:52:48.653566   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | About to run SSH command:
	I1026 01:52:48.653578   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | exit 0
	I1026 01:52:48.777524   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | SSH cmd err, output: <nil>: 
	I1026 01:52:48.777793   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) KVM machine creation complete!
	I1026 01:52:48.778155   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetConfigRaw
	I1026 01:52:48.778844   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:52:48.779014   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:52:48.779149   54330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 01:52:48.779164   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetState
	I1026 01:52:48.780455   54330 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 01:52:48.780467   54330 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 01:52:48.780472   54330 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 01:52:48.780478   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:52:48.782788   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:48.783105   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:48.783134   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:48.783256   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:52:48.783426   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:52:48.783566   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:52:48.783697   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:52:48.783850   54330 main.go:141] libmachine: Using SSH client type: native
	I1026 01:52:48.784032   54330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I1026 01:52:48.784042   54330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 01:52:48.888746   54330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:52:48.888769   54330 main.go:141] libmachine: Detecting the provisioner...
	I1026 01:52:48.888778   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:52:48.891612   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:48.892026   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:48.892060   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:48.892175   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:52:48.892390   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:52:48.892570   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:52:48.892707   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:52:48.892874   54330 main.go:141] libmachine: Using SSH client type: native
	I1026 01:52:48.893055   54330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I1026 01:52:48.893066   54330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 01:52:49.005830   54330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 01:52:49.005927   54330 main.go:141] libmachine: found compatible host: buildroot
	I1026 01:52:49.005940   54330 main.go:141] libmachine: Provisioning with buildroot...
	I1026 01:52:49.005948   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetMachineName
	I1026 01:52:49.006179   54330 buildroot.go:166] provisioning hostname "kubernetes-upgrade-970804"
	I1026 01:52:49.006206   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetMachineName
	I1026 01:52:49.006421   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:52:49.008820   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:49.009180   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:49.009217   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:49.009339   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:52:49.009523   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:52:49.009664   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:52:49.009778   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:52:49.009921   54330 main.go:141] libmachine: Using SSH client type: native
	I1026 01:52:49.010079   54330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I1026 01:52:49.010089   54330 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-970804 && echo "kubernetes-upgrade-970804" | sudo tee /etc/hostname
	I1026 01:52:49.130231   54330 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-970804
	
	I1026 01:52:49.130257   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:52:49.132883   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:49.133208   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:49.133242   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:49.133354   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:52:49.133553   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:52:49.133710   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:52:49.133843   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:52:49.133995   54330 main.go:141] libmachine: Using SSH client type: native
	I1026 01:52:49.134161   54330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I1026 01:52:49.134176   54330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-970804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-970804/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-970804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:52:49.251476   54330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:52:49.251503   54330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:52:49.251528   54330 buildroot.go:174] setting up certificates
	I1026 01:52:49.251541   54330 provision.go:84] configureAuth start
	I1026 01:52:49.251556   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetMachineName
	I1026 01:52:49.251853   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetIP
	I1026 01:52:49.254548   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:49.254871   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:49.254901   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:49.255022   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:52:49.257280   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:49.257688   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:49.257735   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:49.257795   54330 provision.go:143] copyHostCerts
	I1026 01:52:49.257860   54330 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:52:49.257873   54330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:52:49.257930   54330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:52:49.258049   54330 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:52:49.258060   54330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:52:49.258081   54330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:52:49.258130   54330 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:52:49.258137   54330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:52:49.258157   54330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:52:49.258200   54330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-970804 san=[127.0.0.1 192.168.72.48 kubernetes-upgrade-970804 localhost minikube]
	I1026 01:52:49.601369   54330 provision.go:177] copyRemoteCerts
	I1026 01:52:49.601458   54330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:52:49.601484   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:52:49.604279   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:49.604635   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:49.604672   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:49.604786   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:52:49.604962   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:52:49.605120   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:52:49.605245   54330 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804/id_rsa Username:docker}
	I1026 01:52:49.691544   54330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:52:49.715644   54330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1026 01:52:49.738882   54330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 01:52:49.761656   54330 provision.go:87] duration metric: took 510.099364ms to configureAuth
	I1026 01:52:49.761683   54330 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:52:49.762027   54330 config.go:182] Loaded profile config "kubernetes-upgrade-970804": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1026 01:52:49.762122   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:52:49.764788   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:49.765106   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:49.765140   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:49.765271   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:52:49.765483   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:52:49.765636   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:52:49.765803   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:52:49.765968   54330 main.go:141] libmachine: Using SSH client type: native
	I1026 01:52:49.766122   54330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I1026 01:52:49.766136   54330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:52:49.990303   54330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:52:49.990334   54330 main.go:141] libmachine: Checking connection to Docker...
	I1026 01:52:49.990343   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetURL
	I1026 01:52:49.991556   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | Using libvirt version 6000000
	I1026 01:52:49.993670   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:49.994071   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:49.994102   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:49.994227   54330 main.go:141] libmachine: Docker is up and running!
	I1026 01:52:49.994247   54330 main.go:141] libmachine: Reticulating splines...
	I1026 01:52:49.994256   54330 client.go:171] duration metric: took 27.473757927s to LocalClient.Create
	I1026 01:52:49.994282   54330 start.go:167] duration metric: took 27.473832472s to libmachine.API.Create "kubernetes-upgrade-970804"
	I1026 01:52:49.994296   54330 start.go:293] postStartSetup for "kubernetes-upgrade-970804" (driver="kvm2")
	I1026 01:52:49.994320   54330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:52:49.994351   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:52:49.994569   54330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:52:49.994592   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:52:49.996985   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:49.997255   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:49.997275   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:49.997465   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:52:49.997658   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:52:49.997809   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:52:49.997964   54330 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804/id_rsa Username:docker}
	I1026 01:52:50.079221   54330 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:52:50.083373   54330 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:52:50.083396   54330 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:52:50.083486   54330 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:52:50.083588   54330 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:52:50.083728   54330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:52:50.093078   54330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:52:50.114950   54330 start.go:296] duration metric: took 120.633807ms for postStartSetup
	I1026 01:52:50.115024   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetConfigRaw
	I1026 01:52:50.115644   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetIP
	I1026 01:52:50.118208   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:50.118597   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:50.118633   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:50.118826   54330 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/config.json ...
	I1026 01:52:50.119022   54330 start.go:128] duration metric: took 27.622700229s to createHost
	I1026 01:52:50.119043   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:52:50.122064   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:50.122495   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:50.122522   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:50.122691   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:52:50.122878   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:52:50.123025   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:52:50.123157   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:52:50.123343   54330 main.go:141] libmachine: Using SSH client type: native
	I1026 01:52:50.123534   54330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I1026 01:52:50.123549   54330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:52:50.233888   54330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729907570.206268783
	
	I1026 01:52:50.233919   54330 fix.go:216] guest clock: 1729907570.206268783
	I1026 01:52:50.233927   54330 fix.go:229] Guest: 2024-10-26 01:52:50.206268783 +0000 UTC Remote: 2024-10-26 01:52:50.119033378 +0000 UTC m=+62.650993428 (delta=87.235405ms)
	I1026 01:52:50.233948   54330 fix.go:200] guest clock delta is within tolerance: 87.235405ms
	I1026 01:52:50.233968   54330 start.go:83] releasing machines lock for "kubernetes-upgrade-970804", held for 27.737813361s
	I1026 01:52:50.233997   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:52:50.234251   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetIP
	I1026 01:52:50.236868   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:50.237200   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:50.237236   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:50.237362   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:52:50.237898   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:52:50.238095   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:52:50.238209   54330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:52:50.238263   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:52:50.238473   54330 ssh_runner.go:195] Run: cat /version.json
	I1026 01:52:50.238497   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:52:50.241364   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:50.241558   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:50.241758   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:50.241788   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:50.241935   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:50.241955   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:50.241980   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:52:50.242109   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:52:50.242200   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:52:50.242278   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:52:50.242354   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:52:50.242381   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:52:50.242547   54330 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804/id_rsa Username:docker}
	I1026 01:52:50.242553   54330 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804/id_rsa Username:docker}
	I1026 01:52:50.353104   54330 ssh_runner.go:195] Run: systemctl --version
	I1026 01:52:50.358726   54330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:52:50.514622   54330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:52:50.520513   54330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:52:50.520584   54330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:52:50.535529   54330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 01:52:50.535554   54330 start.go:495] detecting cgroup driver to use...
	I1026 01:52:50.535620   54330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:52:50.553256   54330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:52:50.566159   54330 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:52:50.566219   54330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:52:50.578648   54330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:52:50.590985   54330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:52:50.710752   54330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:52:50.873492   54330 docker.go:233] disabling docker service ...
	I1026 01:52:50.873563   54330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:52:50.890239   54330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:52:50.903439   54330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:52:51.030366   54330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:52:51.152847   54330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:52:51.166650   54330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:52:51.183687   54330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1026 01:52:51.183754   54330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:52:51.193716   54330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:52:51.193797   54330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:52:51.204025   54330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:52:51.214457   54330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:52:51.224562   54330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:52:51.236979   54330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:52:51.248719   54330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 01:52:51.248789   54330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 01:52:51.261718   54330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:52:51.272833   54330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:52:51.396342   54330 ssh_runner.go:195] Run: sudo systemctl restart crio
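
For reference, the CRI-O preparation recorded in the lines above boils down to the following consolidated shell sequence. This is only a sketch that reuses the exact paths and values shown in the log (the 02-crio.conf drop-in, the pause:3.2 image, the cgroupfs manager); minikube itself issues each command separately over SSH rather than running a script like this.

#!/usr/bin/env bash
set -euo pipefail

# Point crictl at the CRI-O socket (same /etc/crictl.yaml contents as in the log).
printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml

# Pin the pause image and switch CRI-O to the cgroupfs cgroup manager,
# keeping conmon in the pod cgroup.
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

# Clear the stale CNI directory and make sure bridged pod traffic hits iptables.
sudo rm -rf /etc/cni/net.mk
sudo modprobe br_netfilter
sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"

# Pick up the new configuration.
sudo systemctl daemon-reload
sudo systemctl restart crio
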
	I1026 01:52:51.486934   54330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:52:51.487032   54330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:52:51.491993   54330 start.go:563] Will wait 60s for crictl version
	I1026 01:52:51.492046   54330 ssh_runner.go:195] Run: which crictl
	I1026 01:52:51.495777   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:52:51.540066   54330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:52:51.540144   54330 ssh_runner.go:195] Run: crio --version
	I1026 01:52:51.574253   54330 ssh_runner.go:195] Run: crio --version
	I1026 01:52:51.606171   54330 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1026 01:52:51.607367   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetIP
	I1026 01:52:51.610380   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:51.610728   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:52:37 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:52:51.610753   54330 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:52:51.611020   54330 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1026 01:52:51.615051   54330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:52:51.626573   54330 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-970804 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-970804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 01:52:51.626716   54330 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1026 01:52:51.626771   54330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:52:51.661265   54330 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1026 01:52:51.661332   54330 ssh_runner.go:195] Run: which lz4
	I1026 01:52:51.665402   54330 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 01:52:51.669944   54330 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 01:52:51.669981   54330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1026 01:52:53.135636   54330 crio.go:462] duration metric: took 1.470280477s to copy over tarball
	I1026 01:52:53.135720   54330 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 01:52:55.829223   54330 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.693467529s)
	I1026 01:52:55.829265   54330 crio.go:469] duration metric: took 2.693599589s to extract the tarball
	I1026 01:52:55.829276   54330 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 01:52:55.878612   54330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:52:55.929807   54330 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1026 01:52:55.929838   54330 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1026 01:52:55.929901   54330 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:52:55.929941   54330 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1026 01:52:55.929980   54330 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1026 01:52:55.929942   54330 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1026 01:52:55.930027   54330 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1026 01:52:55.930026   54330 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 01:52:55.929984   54330 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1026 01:52:55.929947   54330 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1026 01:52:55.931533   54330 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1026 01:52:55.931552   54330 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1026 01:52:55.931564   54330 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1026 01:52:55.931582   54330 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1026 01:52:55.931598   54330 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1026 01:52:55.931625   54330 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 01:52:55.931633   54330 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:52:55.931650   54330 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1026 01:52:56.141288   54330 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1026 01:52:56.187192   54330 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1026 01:52:56.187239   54330 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1026 01:52:56.187295   54330 ssh_runner.go:195] Run: which crictl
	I1026 01:52:56.191853   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1026 01:52:56.192363   54330 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 01:52:56.197175   54330 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1026 01:52:56.197744   54330 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1026 01:52:56.199727   54330 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1026 01:52:56.209854   54330 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1026 01:52:56.210953   54330 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1026 01:52:56.382787   54330 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1026 01:52:56.382844   54330 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 01:52:56.382854   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1026 01:52:56.382886   54330 ssh_runner.go:195] Run: which crictl
	I1026 01:52:56.416057   54330 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1026 01:52:56.416098   54330 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1026 01:52:56.416144   54330 ssh_runner.go:195] Run: which crictl
	I1026 01:52:56.416234   54330 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1026 01:52:56.416259   54330 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1026 01:52:56.416283   54330 ssh_runner.go:195] Run: which crictl
	I1026 01:52:56.416343   54330 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1026 01:52:56.416364   54330 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1026 01:52:56.416385   54330 ssh_runner.go:195] Run: which crictl
	I1026 01:52:56.432014   54330 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1026 01:52:56.432049   54330 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1026 01:52:56.432063   54330 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1026 01:52:56.432066   54330 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1026 01:52:56.432107   54330 ssh_runner.go:195] Run: which crictl
	I1026 01:52:56.432107   54330 ssh_runner.go:195] Run: which crictl
	I1026 01:52:56.473294   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 01:52:56.473293   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1026 01:52:56.473371   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1026 01:52:56.473442   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1026 01:52:56.473475   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1026 01:52:56.473511   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1026 01:52:56.473546   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1026 01:52:56.678307   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1026 01:52:56.678416   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 01:52:56.678435   54330 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1026 01:52:56.678529   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1026 01:52:56.678671   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1026 01:52:56.678759   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1026 01:52:56.678846   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1026 01:52:56.841922   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1026 01:52:56.841921   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 01:52:56.841991   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1026 01:52:56.842006   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1026 01:52:56.842044   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1026 01:52:56.842072   54330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1026 01:52:56.975169   54330 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1026 01:52:57.006070   54330 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1026 01:52:57.006128   54330 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1026 01:52:57.006172   54330 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1026 01:52:57.006213   54330 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1026 01:52:57.006309   54330 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1026 01:52:57.264366   54330 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:52:57.415125   54330 cache_images.go:92] duration metric: took 1.485268924s to LoadCachedImages
	W1026 01:52:57.415224   54330 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1026 01:52:57.415250   54330 kubeadm.go:934] updating node { 192.168.72.48 8443 v1.20.0 crio true true} ...
	I1026 01:52:57.415372   54330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-970804 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-970804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 01:52:57.415462   54330 ssh_runner.go:195] Run: crio config
	I1026 01:52:57.481990   54330 cni.go:84] Creating CNI manager for ""
	I1026 01:52:57.482019   54330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 01:52:57.482031   54330 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 01:52:57.482056   54330 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.48 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-970804 NodeName:kubernetes-upgrade-970804 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1026 01:52:57.482224   54330 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-970804"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 01:52:57.482302   54330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1026 01:52:57.495445   54330 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 01:52:57.495519   54330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 01:52:57.508108   54330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1026 01:52:57.530454   54330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:52:57.553602   54330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1026 01:52:57.575863   54330 ssh_runner.go:195] Run: grep 192.168.72.48	control-plane.minikube.internal$ /etc/hosts
	I1026 01:52:57.580553   54330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:52:57.598282   54330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:52:57.778345   54330 ssh_runner.go:195] Run: sudo systemctl start kubelet
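
The kubeadm.yaml staged above (scp'd to /var/tmp/minikube/kubeadm.yaml.new) is the config later handed to kubeadm init. If one wanted to sanity-check such a generated file by hand before the real init, a dry run with the same pinned binary is one option; this is a hypothetical check for illustration, not something the test itself performs:

# Hypothetical validation of the staged kubeadm config (paths as in the log).
sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
  --config /var/tmp/minikube/kubeadm.yaml.new \
  --dry-run --ignore-preflight-errors=all
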
	I1026 01:52:57.804861   54330 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804 for IP: 192.168.72.48
	I1026 01:52:57.804890   54330 certs.go:194] generating shared ca certs ...
	I1026 01:52:57.804919   54330 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:52:57.805121   54330 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:52:57.805206   54330 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:52:57.805225   54330 certs.go:256] generating profile certs ...
	I1026 01:52:57.805306   54330 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/client.key
	I1026 01:52:57.805328   54330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/client.crt with IP's: []
	I1026 01:52:57.985445   54330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/client.crt ...
	I1026 01:52:57.985530   54330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/client.crt: {Name:mkc720dff287ca5be72c86085639d30d8040bd34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:52:57.985760   54330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/client.key ...
	I1026 01:52:57.985807   54330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/client.key: {Name:mkaa3f8bea3753d4c77db93f9abfc4b7bfcf42da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:52:57.985940   54330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/apiserver.key.05758ba0
	I1026 01:52:57.985976   54330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/apiserver.crt.05758ba0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.48]
	I1026 01:52:58.126178   54330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/apiserver.crt.05758ba0 ...
	I1026 01:52:58.126209   54330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/apiserver.crt.05758ba0: {Name:mk32f5b60fd353268db6f0d5a84ed9c4b76ffd37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:52:58.140463   54330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/apiserver.key.05758ba0 ...
	I1026 01:52:58.140505   54330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/apiserver.key.05758ba0: {Name:mk50be4847d3097b38f224f4f171e6f595881df8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:52:58.140616   54330 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/apiserver.crt.05758ba0 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/apiserver.crt
	I1026 01:52:58.140726   54330 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/apiserver.key.05758ba0 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/apiserver.key
	I1026 01:52:58.140798   54330 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/proxy-client.key
	I1026 01:52:58.140815   54330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/proxy-client.crt with IP's: []
	I1026 01:52:58.407180   54330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/proxy-client.crt ...
	I1026 01:52:58.407213   54330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/proxy-client.crt: {Name:mke0621ee00a93bd7446ec76260c5d8df8a5ba4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:52:58.407402   54330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/proxy-client.key ...
	I1026 01:52:58.407425   54330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/proxy-client.key: {Name:mk710e984b39a039f9da21d2fd743128346b6696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:52:58.407686   54330 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:52:58.407747   54330 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:52:58.407763   54330 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:52:58.407797   54330 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:52:58.407833   54330 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:52:58.407867   54330 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:52:58.407919   54330 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:52:58.408533   54330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:52:58.439057   54330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:52:58.465776   54330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:52:58.498677   54330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:52:58.526690   54330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1026 01:52:58.555378   54330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 01:52:58.584182   54330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:52:58.617964   54330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 01:52:58.646988   54330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:52:58.672274   54330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:52:58.696162   54330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:52:58.724724   54330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 01:52:58.742559   54330 ssh_runner.go:195] Run: openssl version
	I1026 01:52:58.748676   54330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:52:58.762753   54330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:52:58.768208   54330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:52:58.768269   54330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:52:58.775851   54330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:52:58.791223   54330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:52:58.813846   54330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:52:58.818291   54330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:52:58.818364   54330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:52:58.825073   54330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 01:52:58.836933   54330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:52:58.848130   54330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:52:58.854318   54330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:52:58.854388   54330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:52:58.861065   54330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
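
The ln -fs commands above install each uploaded PEM under /etc/ssl/certs using its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted CAs by directory lookup. A minimal sketch of the same pattern, reusing the minikubeCA.pem path from the log:

# Compute the subject hash and create the hashed symlink, as the log does for each cert.
HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
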
	I1026 01:52:58.877292   54330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:52:58.885617   54330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 01:52:58.885679   54330 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-970804 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-970804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:52:58.885750   54330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 01:52:58.885811   54330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 01:52:58.936961   54330 cri.go:89] found id: ""
	I1026 01:52:58.937032   54330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 01:52:58.954120   54330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 01:52:58.966042   54330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 01:52:58.976727   54330 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 01:52:58.976745   54330 kubeadm.go:157] found existing configuration files:
	
	I1026 01:52:58.976785   54330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 01:52:58.985932   54330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 01:52:58.986009   54330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 01:52:58.996615   54330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 01:52:59.005870   54330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 01:52:59.005944   54330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 01:52:59.017611   54330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 01:52:59.029730   54330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 01:52:59.029823   54330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 01:52:59.042147   54330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 01:52:59.053635   54330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 01:52:59.053711   54330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 01:52:59.068241   54330 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 01:52:59.218419   54330 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1026 01:52:59.218501   54330 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 01:52:59.373638   54330 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 01:52:59.373807   54330 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 01:52:59.373951   54330 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1026 01:52:59.597790   54330 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 01:52:59.888023   54330 out.go:235]   - Generating certificates and keys ...
	I1026 01:52:59.888170   54330 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 01:52:59.888306   54330 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 01:52:59.888423   54330 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 01:52:59.917197   54330 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1026 01:53:00.083239   54330 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1026 01:53:00.126798   54330 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1026 01:53:00.235845   54330 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1026 01:53:00.236278   54330 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-970804 localhost] and IPs [192.168.72.48 127.0.0.1 ::1]
	I1026 01:53:00.377181   54330 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1026 01:53:00.377455   54330 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-970804 localhost] and IPs [192.168.72.48 127.0.0.1 ::1]
	I1026 01:53:00.748318   54330 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 01:53:00.902703   54330 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 01:53:01.023433   54330 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1026 01:53:01.023542   54330 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 01:53:01.179571   54330 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 01:53:01.433768   54330 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 01:53:01.635281   54330 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 01:53:01.717524   54330 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 01:53:01.735904   54330 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 01:53:01.738229   54330 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 01:53:01.738303   54330 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 01:53:01.901320   54330 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 01:53:01.965715   54330 out.go:235]   - Booting up control plane ...
	I1026 01:53:01.965853   54330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 01:53:01.965962   54330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 01:53:01.966094   54330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 01:53:01.966219   54330 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 01:53:01.966450   54330 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1026 01:53:41.925437   54330 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1026 01:53:41.926243   54330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:53:41.926501   54330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:53:46.926413   54330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:53:46.926698   54330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:53:56.926407   54330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:53:56.926627   54330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:54:16.927079   54330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:54:16.927369   54330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:54:56.928079   54330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:54:56.928376   54330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:54:56.928397   54330 kubeadm.go:310] 
	I1026 01:54:56.928474   54330 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1026 01:54:56.928533   54330 kubeadm.go:310] 		timed out waiting for the condition
	I1026 01:54:56.928542   54330 kubeadm.go:310] 
	I1026 01:54:56.928584   54330 kubeadm.go:310] 	This error is likely caused by:
	I1026 01:54:56.928632   54330 kubeadm.go:310] 		- The kubelet is not running
	I1026 01:54:56.928783   54330 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1026 01:54:56.928803   54330 kubeadm.go:310] 
	I1026 01:54:56.928943   54330 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1026 01:54:56.928992   54330 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1026 01:54:56.929032   54330 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1026 01:54:56.929041   54330 kubeadm.go:310] 
	I1026 01:54:56.929183   54330 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1026 01:54:56.929313   54330 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1026 01:54:56.929328   54330 kubeadm.go:310] 
	I1026 01:54:56.929495   54330 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1026 01:54:56.929636   54330 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1026 01:54:56.929745   54330 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1026 01:54:56.929853   54330 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1026 01:54:56.929865   54330 kubeadm.go:310] 
	I1026 01:54:56.930361   54330 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 01:54:56.930473   54330 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1026 01:54:56.930592   54330 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1026 01:54:56.930699   54330 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-970804 localhost] and IPs [192.168.72.48 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-970804 localhost] and IPs [192.168.72.48 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-970804 localhost] and IPs [192.168.72.48 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-970804 localhost] and IPs [192.168.72.48 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1026 01:54:56.930742   54330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1026 01:54:57.901098   54330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:54:57.914851   54330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 01:54:57.923759   54330 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 01:54:57.923779   54330 kubeadm.go:157] found existing configuration files:
	
	I1026 01:54:57.923843   54330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 01:54:57.932021   54330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 01:54:57.932077   54330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 01:54:57.940566   54330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 01:54:57.948602   54330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 01:54:57.948670   54330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 01:54:57.957383   54330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 01:54:57.966212   54330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 01:54:57.966258   54330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 01:54:57.974923   54330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 01:54:57.984611   54330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 01:54:57.984679   54330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 01:54:57.993788   54330 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 01:54:58.207082   54330 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 01:56:54.552487   54330 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1026 01:56:54.552611   54330 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1026 01:56:54.554311   54330 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1026 01:56:54.554368   54330 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 01:56:54.554495   54330 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 01:56:54.554643   54330 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 01:56:54.554796   54330 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1026 01:56:54.554884   54330 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 01:56:54.580229   54330 out.go:235]   - Generating certificates and keys ...
	I1026 01:56:54.580367   54330 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 01:56:54.580453   54330 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 01:56:54.580526   54330 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1026 01:56:54.580577   54330 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1026 01:56:54.580695   54330 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1026 01:56:54.580773   54330 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1026 01:56:54.580867   54330 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1026 01:56:54.580964   54330 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1026 01:56:54.581078   54330 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1026 01:56:54.581189   54330 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1026 01:56:54.581249   54330 kubeadm.go:310] [certs] Using the existing "sa" key
	I1026 01:56:54.581332   54330 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 01:56:54.581404   54330 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 01:56:54.581493   54330 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 01:56:54.581589   54330 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 01:56:54.581666   54330 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 01:56:54.581830   54330 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 01:56:54.581952   54330 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 01:56:54.582010   54330 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 01:56:54.582101   54330 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 01:56:54.664413   54330 out.go:235]   - Booting up control plane ...
	I1026 01:56:54.664516   54330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 01:56:54.664650   54330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 01:56:54.664767   54330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 01:56:54.664894   54330 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 01:56:54.665119   54330 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1026 01:56:54.665196   54330 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1026 01:56:54.665307   54330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:56:54.665571   54330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:56:54.665655   54330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:56:54.665858   54330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:56:54.665949   54330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:56:54.666121   54330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:56:54.666192   54330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:56:54.666349   54330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:56:54.666446   54330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:56:54.666668   54330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:56:54.666676   54330 kubeadm.go:310] 
	I1026 01:56:54.666730   54330 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1026 01:56:54.666795   54330 kubeadm.go:310] 		timed out waiting for the condition
	I1026 01:56:54.666811   54330 kubeadm.go:310] 
	I1026 01:56:54.666876   54330 kubeadm.go:310] 	This error is likely caused by:
	I1026 01:56:54.666925   54330 kubeadm.go:310] 		- The kubelet is not running
	I1026 01:56:54.667072   54330 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1026 01:56:54.667080   54330 kubeadm.go:310] 
	I1026 01:56:54.667172   54330 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1026 01:56:54.667203   54330 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1026 01:56:54.667231   54330 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1026 01:56:54.667237   54330 kubeadm.go:310] 
	I1026 01:56:54.667329   54330 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1026 01:56:54.667413   54330 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1026 01:56:54.667425   54330 kubeadm.go:310] 
	I1026 01:56:54.667521   54330 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1026 01:56:54.667602   54330 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1026 01:56:54.667677   54330 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1026 01:56:54.667738   54330 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1026 01:56:54.667757   54330 kubeadm.go:310] 
	I1026 01:56:54.667799   54330 kubeadm.go:394] duration metric: took 3m55.782125413s to StartCluster
	I1026 01:56:54.667833   54330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 01:56:54.667888   54330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 01:56:54.714543   54330 cri.go:89] found id: ""
	I1026 01:56:54.714575   54330 logs.go:282] 0 containers: []
	W1026 01:56:54.714585   54330 logs.go:284] No container was found matching "kube-apiserver"
	I1026 01:56:54.714597   54330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 01:56:54.714659   54330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 01:56:54.752157   54330 cri.go:89] found id: ""
	I1026 01:56:54.752187   54330 logs.go:282] 0 containers: []
	W1026 01:56:54.752197   54330 logs.go:284] No container was found matching "etcd"
	I1026 01:56:54.752205   54330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 01:56:54.752273   54330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 01:56:54.788924   54330 cri.go:89] found id: ""
	I1026 01:56:54.788961   54330 logs.go:282] 0 containers: []
	W1026 01:56:54.788972   54330 logs.go:284] No container was found matching "coredns"
	I1026 01:56:54.788980   54330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 01:56:54.789063   54330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 01:56:54.823533   54330 cri.go:89] found id: ""
	I1026 01:56:54.823563   54330 logs.go:282] 0 containers: []
	W1026 01:56:54.823575   54330 logs.go:284] No container was found matching "kube-scheduler"
	I1026 01:56:54.823584   54330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 01:56:54.823658   54330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 01:56:54.860235   54330 cri.go:89] found id: ""
	I1026 01:56:54.860259   54330 logs.go:282] 0 containers: []
	W1026 01:56:54.860274   54330 logs.go:284] No container was found matching "kube-proxy"
	I1026 01:56:54.860280   54330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 01:56:54.860330   54330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 01:56:54.897912   54330 cri.go:89] found id: ""
	I1026 01:56:54.897938   54330 logs.go:282] 0 containers: []
	W1026 01:56:54.897950   54330 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 01:56:54.897959   54330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 01:56:54.898007   54330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 01:56:54.934414   54330 cri.go:89] found id: ""
	I1026 01:56:54.934444   54330 logs.go:282] 0 containers: []
	W1026 01:56:54.934456   54330 logs.go:284] No container was found matching "kindnet"
	I1026 01:56:54.934467   54330 logs.go:123] Gathering logs for kubelet ...
	I1026 01:56:54.934482   54330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 01:56:54.986687   54330 logs.go:123] Gathering logs for dmesg ...
	I1026 01:56:54.986720   54330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 01:56:54.999392   54330 logs.go:123] Gathering logs for describe nodes ...
	I1026 01:56:54.999420   54330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 01:56:55.125275   54330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 01:56:55.125303   54330 logs.go:123] Gathering logs for CRI-O ...
	I1026 01:56:55.125317   54330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 01:56:55.244269   54330 logs.go:123] Gathering logs for container status ...
	I1026 01:56:55.244357   54330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1026 01:56:55.290305   54330 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1026 01:56:55.290370   54330 out.go:270] * 
	* 
	W1026 01:56:55.290435   54330 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1026 01:56:55.290455   54330 out.go:270] * 
	* 
	W1026 01:56:55.291366   54330 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 01:56:55.294648   54330 out.go:201] 
	W1026 01:56:55.295832   54330 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1026 01:56:55.295882   54330 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1026 01:56:55.295909   54330 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1026 01:56:55.297381   54330 out.go:201] 

                                                
                                                
** /stderr **
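The failure captured above is kubeadm timing out because the kubelet never answers on http://localhost:10248/healthz, and the log's own suggestion points at a cgroup-driver mismatch between the kubelet and CRI-O. A minimal diagnostic sketch, assuming shell access to the VM (for example via 'minikube ssh -p kubernetes-upgrade-970804'); the file paths and the CRI-O 'cgroup_manager' key are taken from the log above, while the kubelet 'cgroupDriver' field name is an assumption about the kubelet config file format:

    # Is the kubelet running at all, and why did it exit?
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 50
    # Cgroup driver CRI-O is configured with ...
    grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf
    # ... versus the driver the kubelet was started with
    grep -i cgroupdriver /var/lib/kubelet/config.yaml
    # Control-plane containers CRI-O may have started (command from the kubeadm hint above)
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the two drivers disagree, the suggestion printed later in the log (--extra-config=kubelet.cgroup-driver=systemd) is one way to align them on the next start.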
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-970804 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-970804
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-970804: (1.347424513s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-970804 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-970804 status --format={{.Host}}: exit status 7 (67.01364ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-970804 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-970804 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (35.886823743s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-970804 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-970804 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-970804 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (82.103775ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-970804] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-970804
	    minikube start -p kubernetes-upgrade-970804 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9708042 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-970804 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
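The downgrade is rejected by design; the stderr above lists the recovery options. For reference only, option 1 (recreate the cluster at the older version) could be run as the sketch below, reusing the profile name from the suggestion and the driver/runtime flags from the test invocation earlier in this log; it is not part of the test itself:

    # Tear down the existing v1.31.2 cluster and recreate it at v1.20.0
    minikube delete -p kubernetes-upgrade-970804
    minikube start -p kubernetes-upgrade-970804 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio
    # Confirm the resulting server version
    kubectl --context kubernetes-upgrade-970804 version --output=json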
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-970804 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1026 01:58:52.961188   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-970804 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (13m44.61433252s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-970804] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "kubernetes-upgrade-970804" primary control-plane node in "kubernetes-upgrade-970804" cluster
	* Updating the running kvm2 "kubernetes-upgrade-970804" VM ...
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 01:57:32.833489   61346 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:57:32.833598   61346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:57:32.833608   61346 out.go:358] Setting ErrFile to fd 2...
	I1026 01:57:32.833615   61346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:57:32.833845   61346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 01:57:32.834398   61346 out.go:352] Setting JSON to false
	I1026 01:57:32.835320   61346 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5993,"bootTime":1729901860,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 01:57:32.835418   61346 start.go:139] virtualization: kvm guest
	I1026 01:57:32.837312   61346 out.go:177] * [kubernetes-upgrade-970804] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 01:57:32.839155   61346 notify.go:220] Checking for updates...
	I1026 01:57:32.839182   61346 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 01:57:32.840404   61346 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:57:32.841605   61346 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:57:32.842908   61346 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:57:32.844031   61346 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 01:57:32.845288   61346 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:57:32.846821   61346 config.go:182] Loaded profile config "kubernetes-upgrade-970804": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:57:32.847214   61346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:57:32.847291   61346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:57:32.862240   61346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33061
	I1026 01:57:32.862572   61346 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:57:32.863104   61346 main.go:141] libmachine: Using API Version  1
	I1026 01:57:32.863129   61346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:57:32.863445   61346 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:57:32.863607   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:57:32.863784   61346 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 01:57:32.864059   61346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:57:32.864091   61346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:57:32.878340   61346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33293
	I1026 01:57:32.878758   61346 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:57:32.879230   61346 main.go:141] libmachine: Using API Version  1
	I1026 01:57:32.879254   61346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:57:32.879608   61346 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:57:32.879801   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:57:32.915309   61346 out.go:177] * Using the kvm2 driver based on existing profile
	I1026 01:57:32.916595   61346 start.go:297] selected driver: kvm2
	I1026 01:57:32.916611   61346 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-970804 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-970804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:57:32.916732   61346 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:57:32.917481   61346 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:57:32.917564   61346 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 01:57:32.934038   61346 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 01:57:32.934397   61346 cni.go:84] Creating CNI manager for ""
	I1026 01:57:32.934443   61346 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 01:57:32.934487   61346 start.go:340] cluster config:
	{Name:kubernetes-upgrade-970804 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-970804 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:57:32.934587   61346 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:57:32.936149   61346 out.go:177] * Starting "kubernetes-upgrade-970804" primary control-plane node in "kubernetes-upgrade-970804" cluster
	I1026 01:57:32.937118   61346 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:57:32.937146   61346 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 01:57:32.937153   61346 cache.go:56] Caching tarball of preloaded images
	I1026 01:57:32.937227   61346 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:57:32.937237   61346 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 01:57:32.937315   61346 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/config.json ...
	I1026 01:57:32.937553   61346 start.go:360] acquireMachinesLock for kubernetes-upgrade-970804: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 01:57:32.937597   61346 start.go:364] duration metric: took 24.318µs to acquireMachinesLock for "kubernetes-upgrade-970804"
	I1026 01:57:32.937611   61346 start.go:96] Skipping create...Using existing machine configuration
	I1026 01:57:32.937619   61346 fix.go:54] fixHost starting: 
	I1026 01:57:32.937906   61346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:57:32.937938   61346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:57:32.951865   61346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46493
	I1026 01:57:32.952303   61346 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:57:32.952815   61346 main.go:141] libmachine: Using API Version  1
	I1026 01:57:32.952839   61346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:57:32.953151   61346 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:57:32.953314   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:57:32.953479   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetState
	I1026 01:57:32.955058   61346 fix.go:112] recreateIfNeeded on kubernetes-upgrade-970804: state=Running err=<nil>
	W1026 01:57:32.955093   61346 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 01:57:32.956728   61346 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-970804" VM ...
	I1026 01:57:32.957849   61346 machine.go:93] provisionDockerMachine start ...
	I1026 01:57:32.957866   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:57:32.958050   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:57:32.960351   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:32.960769   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:57:07 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:57:32.960796   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:32.960966   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:57:32.961133   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:57:32.961275   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:57:32.961403   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:57:32.961570   61346 main.go:141] libmachine: Using SSH client type: native
	I1026 01:57:32.961810   61346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I1026 01:57:32.961823   61346 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 01:57:33.065368   61346 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-970804
	
	I1026 01:57:33.065398   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetMachineName
	I1026 01:57:33.065629   61346 buildroot.go:166] provisioning hostname "kubernetes-upgrade-970804"
	I1026 01:57:33.065662   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetMachineName
	I1026 01:57:33.065859   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:57:33.068453   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:33.068761   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:57:07 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:57:33.068806   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:33.068911   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:57:33.069064   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:57:33.069209   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:57:33.069335   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:57:33.069499   61346 main.go:141] libmachine: Using SSH client type: native
	I1026 01:57:33.069668   61346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I1026 01:57:33.069681   61346 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-970804 && echo "kubernetes-upgrade-970804" | sudo tee /etc/hostname
	I1026 01:57:33.193390   61346 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-970804
	
	I1026 01:57:33.193439   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:57:33.195850   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:33.196224   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:57:07 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:57:33.196254   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:33.196443   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:57:33.196629   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:57:33.196768   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:57:33.196883   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:57:33.197045   61346 main.go:141] libmachine: Using SSH client type: native
	I1026 01:57:33.197250   61346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I1026 01:57:33.197272   61346 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-970804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-970804/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-970804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:57:33.306085   61346 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:57:33.306111   61346 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:57:33.306129   61346 buildroot.go:174] setting up certificates
	I1026 01:57:33.306139   61346 provision.go:84] configureAuth start
	I1026 01:57:33.306147   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetMachineName
	I1026 01:57:33.306438   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetIP
	I1026 01:57:33.308839   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:33.309158   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:57:07 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:57:33.309189   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:33.309358   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:57:33.311627   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:33.311950   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:57:07 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:57:33.311979   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:33.312110   61346 provision.go:143] copyHostCerts
	I1026 01:57:33.312176   61346 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:57:33.312189   61346 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:57:33.312242   61346 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:57:33.312356   61346 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:57:33.312365   61346 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:57:33.312392   61346 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:57:33.312441   61346 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:57:33.312447   61346 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:57:33.312465   61346 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:57:33.312513   61346 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-970804 san=[127.0.0.1 192.168.72.48 kubernetes-upgrade-970804 localhost minikube]
	I1026 01:57:33.496920   61346 provision.go:177] copyRemoteCerts
	I1026 01:57:33.496980   61346 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:57:33.497003   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:57:33.499624   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:33.499998   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:57:07 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:57:33.500022   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:33.500227   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:57:33.500419   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:57:33.500583   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:57:33.500689   61346 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804/id_rsa Username:docker}
	I1026 01:57:33.585932   61346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:57:33.608992   61346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1026 01:57:33.633801   61346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 01:57:33.657483   61346 provision.go:87] duration metric: took 351.332684ms to configureAuth
	I1026 01:57:33.657513   61346 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:57:33.657693   61346 config.go:182] Loaded profile config "kubernetes-upgrade-970804": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:57:33.657760   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:57:33.660428   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:33.660809   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:57:07 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:57:33.660838   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:33.661057   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:57:33.661219   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:57:33.661385   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:57:33.661582   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:57:33.661759   61346 main.go:141] libmachine: Using SSH client type: native
	I1026 01:57:33.661925   61346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I1026 01:57:33.661940   61346 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:57:34.407801   61346 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 01:57:34.407823   61346 machine.go:96] duration metric: took 1.449962999s to provisionDockerMachine
	I1026 01:57:34.407835   61346 start.go:293] postStartSetup for "kubernetes-upgrade-970804" (driver="kvm2")
	I1026 01:57:34.407845   61346 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:57:34.407860   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:57:34.408179   61346 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:57:34.408216   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:57:34.411151   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:34.411498   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:57:07 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:57:34.411529   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:34.411725   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:57:34.411889   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:57:34.412039   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:57:34.412178   61346 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804/id_rsa Username:docker}
	I1026 01:57:34.500081   61346 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:57:34.503887   61346 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:57:34.503909   61346 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:57:34.503967   61346 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:57:34.504075   61346 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:57:34.504197   61346 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:57:34.512865   61346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:57:34.535435   61346 start.go:296] duration metric: took 127.58651ms for postStartSetup
	I1026 01:57:34.535483   61346 fix.go:56] duration metric: took 1.597863925s for fixHost
	I1026 01:57:34.535528   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:57:34.538340   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:34.538753   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:57:07 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:57:34.538781   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:34.538992   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:57:34.539203   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:57:34.539393   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:57:34.539507   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:57:34.539685   61346 main.go:141] libmachine: Using SSH client type: native
	I1026 01:57:34.539873   61346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I1026 01:57:34.539888   61346 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:57:34.641695   61346 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729907854.600603223
	
	I1026 01:57:34.641722   61346 fix.go:216] guest clock: 1729907854.600603223
	I1026 01:57:34.641731   61346 fix.go:229] Guest: 2024-10-26 01:57:34.600603223 +0000 UTC Remote: 2024-10-26 01:57:34.535488962 +0000 UTC m=+1.739173109 (delta=65.114261ms)
	I1026 01:57:34.641751   61346 fix.go:200] guest clock delta is within tolerance: 65.114261ms
	I1026 01:57:34.641756   61346 start.go:83] releasing machines lock for "kubernetes-upgrade-970804", held for 1.704150571s
	I1026 01:57:34.641786   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:57:34.642026   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetIP
	I1026 01:57:34.644940   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:34.645306   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:57:07 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:57:34.645337   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:34.645517   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:57:34.646132   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:57:34.646310   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .DriverName
	I1026 01:57:34.646461   61346 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:57:34.646503   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:57:34.646515   61346 ssh_runner.go:195] Run: cat /version.json
	I1026 01:57:34.646537   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHHostname
	I1026 01:57:34.649335   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:34.649710   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:57:07 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:57:34.649743   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:34.649813   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:34.649871   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:57:34.650046   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:57:34.650194   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:57:34.650352   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:57:07 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:57:34.650364   61346 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804/id_rsa Username:docker}
	I1026 01:57:34.650385   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:57:34.650543   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHPort
	I1026 01:57:34.650733   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHKeyPath
	I1026 01:57:34.650856   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetSSHUsername
	I1026 01:57:34.650995   61346 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/kubernetes-upgrade-970804/id_rsa Username:docker}
	I1026 01:57:34.796948   61346 ssh_runner.go:195] Run: systemctl --version
	I1026 01:57:34.807836   61346 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:57:35.037156   61346 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:57:35.045879   61346 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:57:35.045953   61346 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:57:35.072884   61346 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 01:57:35.072915   61346 start.go:495] detecting cgroup driver to use...
	I1026 01:57:35.072989   61346 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:57:35.138824   61346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:57:35.164235   61346 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:57:35.164315   61346 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:57:35.187975   61346 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:57:35.206490   61346 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:57:35.382635   61346 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:57:35.544274   61346 docker.go:233] disabling docker service ...
	I1026 01:57:35.544354   61346 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:57:35.562762   61346 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:57:35.578707   61346 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:57:35.729996   61346 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:57:35.890481   61346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:57:35.903813   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:57:35.926876   61346 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 01:57:35.926946   61346 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:57:35.937948   61346 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:57:35.938017   61346 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:57:35.947669   61346 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:57:35.960866   61346 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:57:35.971537   61346 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:57:35.981428   61346 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:57:35.994110   61346 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:57:36.006411   61346 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:57:36.016504   61346 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:57:36.026847   61346 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:57:36.036606   61346 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:57:36.213845   61346 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 01:59:06.368204   61346 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.154323698s)
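The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf before crio is restarted: the pause image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is set to "cgroupfs", conmon_cgroup is replaced with "pod", and "net.ipv4.ip_unprivileged_port_start=0" is inserted into default_sysctls. A rough Go sketch of those substitutions, applied to an assumed sample config rather than the file on the VM, is:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed sample of /etc/crio/crio.conf.d/02-crio.conf before the edits.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
default_sysctls = [
]
`
	// Mirror the sed commands logged above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	conf = regexp.MustCompile(`(?m)^(default_sysctls *= *\[)$`).
		ReplaceAllString(conf, "$1\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
	fmt.Print(conf)
}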
	I1026 01:59:06.368245   61346 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:59:06.368305   61346 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:59:06.373619   61346 start.go:563] Will wait 60s for crictl version
	I1026 01:59:06.373672   61346 ssh_runner.go:195] Run: which crictl
	I1026 01:59:06.377153   61346 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:59:06.413404   61346 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
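The crio restart above took 1m30s, after which the runner waits up to 60s for the socket at /var/run/crio/crio.sock and another 60s for crictl to report the runtime version shown here (cri-o 1.29.1). A small illustrative sketch of such a bounded wait, not the code behind start.go:542/start.go:563, might be:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket dials a unix socket until it answers or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("socket %s not ready after %s: %w", path, timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("CRI socket is ready")
}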
	I1026 01:59:06.413498   61346 ssh_runner.go:195] Run: crio --version
	I1026 01:59:06.441064   61346 ssh_runner.go:195] Run: crio --version
	I1026 01:59:06.468865   61346 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 01:59:06.470067   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) Calling .GetIP
	I1026 01:59:06.472485   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:59:06.472839   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:51:fe", ip: ""} in network mk-kubernetes-upgrade-970804: {Iface:virbr4 ExpiryTime:2024-10-26 02:57:07 +0000 UTC Type:0 Mac:52:54:00:33:51:fe Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:kubernetes-upgrade-970804 Clientid:01:52:54:00:33:51:fe}
	I1026 01:59:06.472871   61346 main.go:141] libmachine: (kubernetes-upgrade-970804) DBG | domain kubernetes-upgrade-970804 has defined IP address 192.168.72.48 and MAC address 52:54:00:33:51:fe in network mk-kubernetes-upgrade-970804
	I1026 01:59:06.473067   61346 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1026 01:59:06.476868   61346 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-970804 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:kubernetes-upgrade-970804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 01:59:06.476967   61346 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 01:59:06.477009   61346 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:59:06.524640   61346 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 01:59:06.524661   61346 crio.go:433] Images already preloaded, skipping extraction
	I1026 01:59:06.524713   61346 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:59:06.556603   61346 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 01:59:06.556624   61346 cache_images.go:84] Images are preloaded, skipping loading
	I1026 01:59:06.556634   61346 kubeadm.go:934] updating node { 192.168.72.48 8443 v1.31.2 crio true true} ...
	I1026 01:59:06.556752   61346 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-970804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-970804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
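kubeadm.go:946 logs the kubelet systemd drop-in that is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down (the 324-byte scp). As a hedged illustration of how a drop-in like this can be rendered from the node name, IP and Kubernetes version, assuming a text/template approach rather than minikube's actual code:

package main

import (
	"os"
	"text/template"
)

// Hypothetical parameters; the values mirror the log above.
type node struct {
	Hostname, IP, KubeVersion string
}

// Assumed template matching the drop-in content shown in the log;
// the file minikube actually writes may differ in detail.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubeVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	n := node{Hostname: "kubernetes-upgrade-970804", IP: "192.168.72.48", KubeVersion: "v1.31.2"}
	if err := t.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}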
	I1026 01:59:06.556832   61346 ssh_runner.go:195] Run: crio config
	I1026 01:59:06.603243   61346 cni.go:84] Creating CNI manager for ""
	I1026 01:59:06.603265   61346 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 01:59:06.603275   61346 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 01:59:06.603295   61346 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.48 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-970804 NodeName:kubernetes-upgrade-970804 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 01:59:06.603426   61346 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-970804"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.48"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.48"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 01:59:06.603488   61346 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:59:06.612952   61346 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 01:59:06.613016   61346 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 01:59:06.621822   61346 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1026 01:59:06.636842   61346 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:59:06.652002   61346 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I1026 01:59:06.667223   61346 ssh_runner.go:195] Run: grep 192.168.72.48	control-plane.minikube.internal$ /etc/hosts
	I1026 01:59:06.670651   61346 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:59:06.811067   61346 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:59:06.825404   61346 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804 for IP: 192.168.72.48
	I1026 01:59:06.825436   61346 certs.go:194] generating shared ca certs ...
	I1026 01:59:06.825458   61346 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:59:06.825614   61346 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:59:06.825667   61346 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:59:06.825677   61346 certs.go:256] generating profile certs ...
	I1026 01:59:06.825745   61346 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/client.key
	I1026 01:59:06.825791   61346 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/apiserver.key.05758ba0
	I1026 01:59:06.825837   61346 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/proxy-client.key
	I1026 01:59:06.825979   61346 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:59:06.826009   61346 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:59:06.826019   61346 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:59:06.826042   61346 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:59:06.826064   61346 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:59:06.826084   61346 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:59:06.826119   61346 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:59:06.826799   61346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:59:06.848736   61346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:59:06.871278   61346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:59:06.892858   61346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:59:06.914679   61346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1026 01:59:06.935833   61346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 01:59:06.956716   61346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:59:06.978337   61346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 01:59:07.000373   61346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:59:07.021694   61346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:59:07.042610   61346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:59:07.063390   61346 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 01:59:07.078281   61346 ssh_runner.go:195] Run: openssl version
	I1026 01:59:07.083747   61346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:59:07.093650   61346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:59:07.097541   61346 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:59:07.097587   61346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:59:07.102760   61346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:59:07.111577   61346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:59:07.121675   61346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:59:07.125679   61346 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:59:07.125729   61346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:59:07.131092   61346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:59:07.140034   61346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:59:07.149830   61346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:59:07.153850   61346 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:59:07.153907   61346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:59:07.159107   61346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 01:59:07.167631   61346 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:59:07.172032   61346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 01:59:07.177487   61346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 01:59:07.182722   61346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 01:59:07.187993   61346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 01:59:07.193235   61346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 01:59:07.198518   61346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
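The openssl x509 -checkend 86400 runs above ask, for each control-plane certificate, whether it will still be valid 24 hours from now; a non-zero exit would force regeneration. The same check can be expressed with Go's standard crypto/x509 package, as a sketch (the path is one of those from the log; the helper name is made up):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window -- the question `openssl x509 -noout -checkend 86400`
// answers in the log above.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}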
	I1026 01:59:07.203916   61346 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-970804 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.2 ClusterName:kubernetes-upgrade-970804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:59:07.204029   61346 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 01:59:07.204080   61346 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 01:59:07.244173   61346 cri.go:89] found id: "9ef3ccc3887e75fae4bad0625e44439a13cd4a620738ab6ea78ff7e5a6e547d3"
	I1026 01:59:07.244194   61346 cri.go:89] found id: "67f09933420ad137465f9da1353f3c4956b885339d3cb2030fd971288baa57d7"
	I1026 01:59:07.244199   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 01:59:07.244202   61346 cri.go:89] found id: "83d7179b935c7d73a452a22732460b980e09c3cb10d30830fa947debcd89ad3c"
	I1026 01:59:07.244204   61346 cri.go:89] found id: "45e904cbce05f9b2ba918b078bca2d856c9c8f6d4ec3a3d39e09a09402a8c93d"
	I1026 01:59:07.244207   61346 cri.go:89] found id: "b27e02fd1f48fdae6cb40ff2e997ae22fe6f557329f0df2064c3a3a40d63dfca"
	I1026 01:59:07.244209   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 01:59:07.244212   61346 cri.go:89] found id: ""
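Having collected the kube-system container IDs from crictl, the runner next dumps the full runc state as JSON (the long cri.go:116 listing that follows). As an assumed sketch of how that listing can be decoded, keeping only fields visible in the output below (id, status and the io.kubernetes.* annotations; the trimmed sample reuses the kube-scheduler entry from the listing):

package main

import (
	"encoding/json"
	"fmt"
)

// container mirrors the fields visible in the `runc list -f json` output below.
type container struct {
	ID          string            `json:"id"`
	Status      string            `json:"status"`
	Annotations map[string]string `json:"annotations"`
}

func main() {
	// Trimmed sample in the same shape as the listing that follows.
	raw := `[{"id":"169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14",
	          "status":"stopped",
	          "annotations":{"io.kubernetes.pod.namespace":"kube-system",
	                         "io.kubernetes.container.name":"kube-scheduler"}}]`

	var containers []container
	if err := json.Unmarshal([]byte(raw), &containers); err != nil {
		panic(err)
	}
	for _, c := range containers {
		if c.Annotations["io.kubernetes.pod.namespace"] != "kube-system" {
			continue
		}
		fmt.Printf("%s %s %s\n", c.ID[:12], c.Status, c.Annotations["io.kubernetes.container.name"])
	}
}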
	I1026 01:59:07.244256   61346 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 01:59:07.273335   61346 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0686c24be11d5521376b9efd965302908ca70d739738c3d6eeecb78ef6c844f4","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0686c24be11d5521376b9efd965302908ca70d739738c3d6eeecb78ef6c844f4/userdata","rootfs":"/var/lib/containers/storage/overlay/0d02ef46efca2e716bc820281f5c6b8e0e524221e96691b6324bf57ab3585bb7/merged","created":"2024-10-26T01:57:26.075610968Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-10-26T01:57:22.492643353Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"a0f307d03f5ab1b21c66a93d0c1d2592\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.72.48:8443\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/poda0f307d03f5ab1b21c66a93d0c1d2592","io.kubernetes.cri-o.ContainerID":"0686c24b
e11d5521376b9efd965302908ca70d739738c3d6eeecb78ef6c844f4","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-kubernetes-upgrade-970804_kube-system_a0f307d03f5ab1b21c66a93d0c1d2592_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-10-26T01:57:25.969465959Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-970804","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/0686c24be11d5521376b9efd965302908ca70d739738c3d6eeecb78ef6c844f4/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-apiserver-kubernetes-upgrade-970804","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-970804\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\",\"component\":\"kube-apiserver\",\"io.kuberne
tes.pod.uid\":\"a0f307d03f5ab1b21c66a93d0c1d2592\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-970804_a0f307d03f5ab1b21c66a93d0c1d2592/0686c24be11d5521376b9efd965302908ca70d739738c3d6eeecb78ef6c844f4.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-kubernetes-upgrade-970804\",\"uid\":\"a0f307d03f5ab1b21c66a93d0c1d2592\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0d02ef46efca2e716bc820281f5c6b8e0e524221e96691b6324bf57ab3585bb7/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-kubernetes-upgrade-970804_kube-system_a0f307d03f5ab1b21c66a93d0c1d2592_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":256,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-
o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/0686c24be11d5521376b9efd965302908ca70d739738c3d6eeecb78ef6c844f4/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"0686c24be11d5521376b9efd965302908ca70d739738c3d6eeecb78ef6c844f4","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-970804_kube-system_a0f307d03f5ab1b21c66a93d0c1d2592_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/0686c24be11d5521376b9efd965302908ca70d739738c3d6eeecb78ef6c844f4/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-970804","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"a0f307d03f5ab1b21c66a93d0c1d2592","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.72.48:8443","kubernetes.io/config.hash":"a0f307d03f5ab1b21c66a93d0c1d2592","kubernetes.io/config.seen":"
2024-10-26T01:57:22.492643353Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14/userdata","rootfs":"/var/lib/containers/storage/overlay/b24e3c84f973f11fe7690c8da6e473d94399a71dd2ffe3db2f437fd543947d0c/merged","created":"2024-10-26T01:57:34.92690472Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"16c835f9","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"16c835f9\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termi
nation-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-10-26T01:57:34.856498705Z","io.kubernetes.cri-o.Image":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.31.2","io.kubernetes.cri-o.ImageRef":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-970804\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"76aba175443e2543433bcdb489ed7385\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-970804_76aba175443e2543433bcdb489ed7385/kube-scheduler/1.l
og","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b24e3c84f973f11fe7690c8da6e473d94399a71dd2ffe3db2f437fd543947d0c/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-970804_kube-system_76aba175443e2543433bcdb489ed7385_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b1ecdd62a9317d5877cc08cdafadbd9f345eaec4d4f24d48786c0dfdd2e1d464/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b1ecdd62a9317d5877cc08cdafadbd9f345eaec4d4f24d48786c0dfdd2e1d464","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-970804_kube-system_76aba175443e2543433bcdb489ed7385_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/e
tc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/76aba175443e2543433bcdb489ed7385/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/76aba175443e2543433bcdb489ed7385/containers/kube-scheduler/e6d55a78\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-970804","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"76aba175443e2543433bcdb489ed7385","kubernetes.io/config.hash":"76aba175443e2543433bcdb489ed7385","kubernetes.io/config.seen":"2024-10-26T01:57:22.492649571Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"45e904cbce05f9b2ba918b078bca2d856c9c8f6d4ec3a3d39e0
9a09402a8c93d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/45e904cbce05f9b2ba918b078bca2d856c9c8f6d4ec3a3d39e09a09402a8c93d/userdata","rootfs":"/var/lib/containers/storage/overlay/0dbb78346e57e2915f4e187b4e9759163f98587d811a0f77e16ddc10ee7ded89/merged","created":"2024-10-26T01:57:26.303601677Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c6927529","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c6927529\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"45e904cbce05f9b2ba918b078bc
a2d856c9c8f6d4ec3a3d39e09a09402a8c93d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-10-26T01:57:26.198744896Z","io.kubernetes.cri-o.Image":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.31.2","io.kubernetes.cri-o.ImageRef":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-970804\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a0f307d03f5ab1b21c66a93d0c1d2592\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-970804_a0f307d03f5ab1b21c66a93d0c1d2592/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0dbb78346e57e2915f4e187b4e9759163f98587d811a0f77e16ddc
10ee7ded89/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-970804_kube-system_a0f307d03f5ab1b21c66a93d0c1d2592_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/0686c24be11d5521376b9efd965302908ca70d739738c3d6eeecb78ef6c844f4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0686c24be11d5521376b9efd965302908ca70d739738c3d6eeecb78ef6c844f4","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-970804_kube-system_a0f307d03f5ab1b21c66a93d0c1d2592_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a0f307d03f5ab1b21c66a93d0c1d2592/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"
host_path\":\"/var/lib/kubelet/pods/a0f307d03f5ab1b21c66a93d0c1d2592/containers/kube-apiserver/594b7545\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-970804","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a0f307d03f5ab1b21c66a93d0c1d2592","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.72.48:8443","kubernetes.io/config.hash":"a0f307d03f5ab1b21c66a93d0c1d2592","kubernetes.io/config.seen":"2024-10
-26T01:57:22.492643353Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"504e0d0eac535e3b9667656d378f285734645c6c9b1fa2be1a803983785ed01a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/504e0d0eac535e3b9667656d378f285734645c6c9b1fa2be1a803983785ed01a/userdata","rootfs":"/var/lib/containers/storage/overlay/74118b3563bbcf4f142db28b06942289628dcbcc26699584ffd93412cfd90b90/merged","created":"2024-10-26T01:57:26.094209606Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"010c84f8ca6b96fa6474e922217a9c93\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.72.48:2379\",\"kubernetes.io/config.seen\":\"2024-10-26T01:57:22.539101137Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod010c84f8ca6b96fa6474e922217a9c93","io.kubernetes.cri-o.ContainerID":"504e0
d0eac535e3b9667656d378f285734645c6c9b1fa2be1a803983785ed01a","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-kubernetes-upgrade-970804_kube-system_010c84f8ca6b96fa6474e922217a9c93_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-10-26T01:57:25.972119198Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-970804","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/504e0d0eac535e3b9667656d378f285734645c6c9b1fa2be1a803983785ed01a/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"etcd-kubernetes-upgrade-970804","io.kubernetes.cri-o.Labels":"{\"component\":\"etcd\",\"io.kubernetes.pod.uid\":\"010c84f8ca6b96fa6474e922217a9c93\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-970804\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"co
ntrol-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-970804_010c84f8ca6b96fa6474e922217a9c93/504e0d0eac535e3b9667656d378f285734645c6c9b1fa2be1a803983785ed01a.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-kubernetes-upgrade-970804\",\"uid\":\"010c84f8ca6b96fa6474e922217a9c93\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/74118b3563bbcf4f142db28b06942289628dcbcc26699584ffd93412cfd90b90/merged","io.kubernetes.cri-o.Name":"k8s_etcd-kubernetes-upgrade-970804_kube-system_010c84f8ca6b96fa6474e922217a9c93_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/
run/containers/storage/overlay-containers/504e0d0eac535e3b9667656d378f285734645c6c9b1fa2be1a803983785ed01a/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"504e0d0eac535e3b9667656d378f285734645c6c9b1fa2be1a803983785ed01a","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-970804_kube-system_010c84f8ca6b96fa6474e922217a9c93_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/504e0d0eac535e3b9667656d378f285734645c6c9b1fa2be1a803983785ed01a/userdata/shm","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-970804","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"010c84f8ca6b96fa6474e922217a9c93","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.72.48:2379","kubernetes.io/config.hash":"010c84f8ca6b96fa6474e922217a9c93","kubernetes.io/config.seen":"2024-10-26T01:57:22.539101137Z","kubernetes.io/config.source":"file","tier":"control-plane"},"
owner":"root"},{"ociVersion":"1.0.2-dev","id":"67f09933420ad137465f9da1353f3c4956b885339d3cb2030fd971288baa57d7","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/67f09933420ad137465f9da1353f3c4956b885339d3cb2030fd971288baa57d7/userdata","rootfs":"/var/lib/containers/storage/overlay/c351f6059f59b79ef51d62c49e43eebf1bf12e44ee8834ffd8feb777a4eccc93/merged","created":"2024-10-26T01:57:34.959366559Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c6927529","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c6927529\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.po
d.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"67f09933420ad137465f9da1353f3c4956b885339d3cb2030fd971288baa57d7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-10-26T01:57:34.870024104Z","io.kubernetes.cri-o.Image":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.31.2","io.kubernetes.cri-o.ImageRef":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-970804\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a0f307d03f5ab1b21c66a93d0c1d2592\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-970804_a0f307d03f5ab1b21c66a93d0c1d2592/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernet
es.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c351f6059f59b79ef51d62c49e43eebf1bf12e44ee8834ffd8feb777a4eccc93/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-970804_kube-system_a0f307d03f5ab1b21c66a93d0c1d2592_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e3fca36b4dcf7dbb9b510ec100c934f41119fbe8292e5fb8acedd1dd4de16f3e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e3fca36b4dcf7dbb9b510ec100c934f41119fbe8292e5fb8acedd1dd4de16f3e","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-970804_kube-system_a0f307d03f5ab1b21c66a93d0c1d2592_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a0f307d03f5ab1b21c66a93d0c1d2592/etc-hosts\"
,\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a0f307d03f5ab1b21c66a93d0c1d2592/containers/kube-apiserver/43c7365e\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-970804","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a0f307d03f5ab1b21c66a93d0c1d2592","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168
.72.48:8443","kubernetes.io/config.hash":"a0f307d03f5ab1b21c66a93d0c1d2592","kubernetes.io/config.seen":"2024-10-26T01:57:22.492643353Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"72b3704c4d49e457c172adbbbae844fa3073622227ba5f371efcd0aaf0f8934b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/72b3704c4d49e457c172adbbbae844fa3073622227ba5f371efcd0aaf0f8934b/userdata","rootfs":"/var/lib/containers/storage/overlay/fcc82ab029ad3ac55a2efc99513e58ed17ae32f649787568d816bc15d3d7016a/merged","created":"2024-10-26T01:57:26.083349784Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"76aba175443e2543433bcdb489ed7385\",\"kubernetes.io/config.seen\":\"2024-10-26T01:57:22.492649571Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod76aba175443e2543433bcdb489ed7385","i
o.kubernetes.cri-o.ContainerID":"72b3704c4d49e457c172adbbbae844fa3073622227ba5f371efcd0aaf0f8934b","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-kubernetes-upgrade-970804_kube-system_76aba175443e2543433bcdb489ed7385_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-10-26T01:57:25.993302803Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-970804","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/72b3704c4d49e457c172adbbbae844fa3073622227ba5f371efcd0aaf0f8934b/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-scheduler-kubernetes-upgrade-970804","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"76aba175443e2543433bcdb489ed7385\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-kube
rnetes-upgrade-970804\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-970804_76aba175443e2543433bcdb489ed7385/72b3704c4d49e457c172adbbbae844fa3073622227ba5f371efcd0aaf0f8934b.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-kubernetes-upgrade-970804\",\"uid\":\"76aba175443e2543433bcdb489ed7385\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fcc82ab029ad3ac55a2efc99513e58ed17ae32f649787568d816bc15d3d7016a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-kubernetes-upgrade-970804_kube-system_76aba175443e2543433bcdb489ed7385_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri
-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/72b3704c4d49e457c172adbbbae844fa3073622227ba5f371efcd0aaf0f8934b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"72b3704c4d49e457c172adbbbae844fa3073622227ba5f371efcd0aaf0f8934b","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-970804_kube-system_76aba175443e2543433bcdb489ed7385_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/72b3704c4d49e457c172adbbbae844fa3073622227ba5f371efcd0aaf0f8934b/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-970804","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"76aba175443e2543433bcdb489ed7385","kubernetes.io/config.hash":"76aba175443e2543433bcdb489ed7385","kubernetes.io/config.seen":"2024-10-26T01:57:22.492649571Z","kubernetes.io
/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e/userdata","rootfs":"/var/lib/containers/storage/overlay/bf4797aa2149b32da8057c7abe79c93ac6c54af132e35dbf3cddda9b8b2aa673/merged","created":"2024-10-26T01:57:26.273892048Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cdf7d3fa","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"cdf7d3fa\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessa
gePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-10-26T01:57:26.183634189Z","io.kubernetes.cri-o.Image":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri-o.ImageRef":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-970804\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"010c84f8ca6b96fa6474e922217a9c93\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-970804_010c84f8ca6b96fa6474e922217a9c93/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/cont
ainers/storage/overlay/bf4797aa2149b32da8057c7abe79c93ac6c54af132e35dbf3cddda9b8b2aa673/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-kubernetes-upgrade-970804_kube-system_010c84f8ca6b96fa6474e922217a9c93_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/504e0d0eac535e3b9667656d378f285734645c6c9b1fa2be1a803983785ed01a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"504e0d0eac535e3b9667656d378f285734645c6c9b1fa2be1a803983785ed01a","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-970804_kube-system_010c84f8ca6b96fa6474e922217a9c93_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/010c84f8ca6b96fa6474e922217a9c93/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},
{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/010c84f8ca6b96fa6474e922217a9c93/containers/etcd/58dac98c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-970804","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"010c84f8ca6b96fa6474e922217a9c93","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.72.48:2379","kubernetes.io/config.hash":"010c84f8ca6b96fa6474e922217a9c93","kubernetes.io/config.seen":"2024-10-26T01:57:22.539101137Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"
83d7179b935c7d73a452a22732460b980e09c3cb10d30830fa947debcd89ad3c","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/83d7179b935c7d73a452a22732460b980e09c3cb10d30830fa947debcd89ad3c/userdata","rootfs":"/var/lib/containers/storage/overlay/d8655b42ffab135d1400f71cca1033f4ca0475ace031b2b71546b1c614c13f4b/merged","created":"2024-10-26T01:57:26.323542522Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3111262b","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3111262b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","i
o.kubernetes.cri-o.ContainerID":"83d7179b935c7d73a452a22732460b980e09c3cb10d30830fa947debcd89ad3c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-10-26T01:57:26.211180519Z","io.kubernetes.cri-o.Image":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.31.2","io.kubernetes.cri-o.ImageRef":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-970804\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"75f6878c02c356168d8286fe4d911a46\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-970804_75f6878c02c356168d8286fe4d911a46/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubern
etes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d8655b42ffab135d1400f71cca1033f4ca0475ace031b2b71546b1c614c13f4b/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-970804_kube-system_75f6878c02c356168d8286fe4d911a46_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9d2a8ffcf0af504f218597b8b2d7296fcb0f2d96fa39d08b9fe3ad93cbc8136c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9d2a8ffcf0af504f218597b8b2d7296fcb0f2d96fa39d08b9fe3ad93cbc8136c","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-970804_kube-system_75f6878c02c356168d8286fe4d911a46_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/75f6878c02c3561
68d8286fe4d911a46/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/75f6878c02c356168d8286fe4d911a46/containers/kube-controller-manager/c7f5d7d8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exe
c\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-970804","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"75f6878c02c356168d8286fe4d911a46","kubernetes.io/config.hash":"75f6878c02c356168d8286fe4d911a46","kubernetes.io/config.seen":"2024-10-26T01:57:22.492648258Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9d2a8ffcf0af504f218597b8b2d7296fcb0f2d96fa39d08b9fe3ad93cbc8136c","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9d2a8ffcf0af504f218597b8b2d7296fcb0f2d96fa39d08b9fe3ad93cbc8136c/userdata","rootfs":"/var/lib/containers/storage/overlay/550fc72f6258905a30c50c470c989cbccac279e02d7b87c60cfd7a19eabe0896/merged","created":"2024-10-26T01:57:26.087990752Z","annotations":{"component":"kube-controller-manager","io.contai
ner.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-10-26T01:57:22.492648258Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"75f6878c02c356168d8286fe4d911a46\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod75f6878c02c356168d8286fe4d911a46","io.kubernetes.cri-o.ContainerID":"9d2a8ffcf0af504f218597b8b2d7296fcb0f2d96fa39d08b9fe3ad93cbc8136c","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-kubernetes-upgrade-970804_kube-system_75f6878c02c356168d8286fe4d911a46_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-10-26T01:57:25.978711064Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-970804","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9d2a8ffcf0af504f218597b8b2d7296fcb0f2d96fa39d08b9fe3ad93cbc8136c/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.
io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-controller-manager-kubernetes-upgrade-970804","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-970804\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"75f6878c02c356168d8286fe4d911a46\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-970804_75f6878c02c356168d8286fe4d911a46/9d2a8ffcf0af504f218597b8b2d7296fcb0f2d96fa39d08b9fe3ad93cbc8136c.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-kubernetes-upgrade-970804\",\"uid\":\"75f6878c02c356168d8286fe4d911a46\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/550fc72f6258905a30c50c470c989cbccac279e02d7b87c60cfd7a19eabe0896/merged","io.
kubernetes.cri-o.Name":"k8s_kube-controller-manager-kubernetes-upgrade-970804_kube-system_75f6878c02c356168d8286fe4d911a46_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":204,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9d2a8ffcf0af504f218597b8b2d7296fcb0f2d96fa39d08b9fe3ad93cbc8136c/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9d2a8ffcf0af504f218597b8b2d7296fcb0f2d96fa39d08b9fe3ad93cbc8136c","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-970804_kube-system_75f6878c02c356168d8286fe4d911a46_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":
"/var/run/containers/storage/overlay-containers/9d2a8ffcf0af504f218597b8b2d7296fcb0f2d96fa39d08b9fe3ad93cbc8136c/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-970804","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"75f6878c02c356168d8286fe4d911a46","kubernetes.io/config.hash":"75f6878c02c356168d8286fe4d911a46","kubernetes.io/config.seen":"2024-10-26T01:57:22.492648258Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ef3ccc3887e75fae4bad0625e44439a13cd4a620738ab6ea78ff7e5a6e547d3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9ef3ccc3887e75fae4bad0625e44439a13cd4a620738ab6ea78ff7e5a6e547d3/userdata","rootfs":"/var/lib/containers/storage/overlay/d0edf29faf24d4c44ca6874119c28210c6098ff0918d58122ea549516e1d904c/merged","created":"2024-10-26T01:57:34.971362593Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3111262b","io.kubernete
s.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3111262b\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9ef3ccc3887e75fae4bad0625e44439a13cd4a620738ab6ea78ff7e5a6e547d3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-10-26T01:57:34.884541992Z","io.kubernetes.cri-o.Image":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.31.2","io.kubernetes.cri-o.ImageRef":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503"
,"io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-970804\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"75f6878c02c356168d8286fe4d911a46\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-970804_75f6878c02c356168d8286fe4d911a46/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d0edf29faf24d4c44ca6874119c28210c6098ff0918d58122ea549516e1d904c/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-970804_kube-system_75f6878c02c356168d8286fe4d911a46_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c82d403161a10956de0f70befba5f13c95b6e57ab4860fe3bdf3af7d1
dfdc17c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c82d403161a10956de0f70befba5f13c95b6e57ab4860fe3bdf3af7d1dfdc17c","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-970804_kube-system_75f6878c02c356168d8286fe4d911a46_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/75f6878c02c356168d8286fe4d911a46/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/75f6878c02c356168d8286fe4d911a46/containers/kube-controller-manager/a54970a9\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_pa
th\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-970804","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"75f6878c02c356168d8286fe4d911a46","kubernetes.io/config.hash":"75f6878c02c356168d8286fe4d911a46","kubernetes.io/config.seen":"2024-10-26
T01:57:22.492648258Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b1ecdd62a9317d5877cc08cdafadbd9f345eaec4d4f24d48786c0dfdd2e1d464","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b1ecdd62a9317d5877cc08cdafadbd9f345eaec4d4f24d48786c0dfdd2e1d464/userdata","rootfs":"/var/lib/containers/storage/overlay/85ae81369b991eef9ddc06ebc0d5cf7bbd35aca60ada962d77d144c05fe15f22/merged","created":"2024-10-26T01:57:34.759511743Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-10-26T01:57:22.492649571Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"76aba175443e2543433bcdb489ed7385\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod76aba175443e2543433bcdb489ed7385","io.kubernetes.cri-o.ContainerID":"b1ecdd62a9317d5877cc08cdafadbd9f345eaec4d4f24d48786c0dfdd2e1d464","io.kubernetes.c
ri-o.ContainerName":"k8s_POD_kube-scheduler-kubernetes-upgrade-970804_kube-system_76aba175443e2543433bcdb489ed7385_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-10-26T01:57:34.652389047Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-970804","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/b1ecdd62a9317d5877cc08cdafadbd9f345eaec4d4f24d48786c0dfdd2e1d464/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-scheduler-kubernetes-upgrade-970804","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"76aba175443e2543433bcdb489ed7385\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-970804\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.
LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-970804_76aba175443e2543433bcdb489ed7385/b1ecdd62a9317d5877cc08cdafadbd9f345eaec4d4f24d48786c0dfdd2e1d464.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-kubernetes-upgrade-970804\",\"uid\":\"76aba175443e2543433bcdb489ed7385\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/85ae81369b991eef9ddc06ebc0d5cf7bbd35aca60ada962d77d144c05fe15f22/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-kubernetes-upgrade-970804_kube-system_76aba175443e2543433bcdb489ed7385_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath"
:"/var/run/containers/storage/overlay-containers/b1ecdd62a9317d5877cc08cdafadbd9f345eaec4d4f24d48786c0dfdd2e1d464/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"b1ecdd62a9317d5877cc08cdafadbd9f345eaec4d4f24d48786c0dfdd2e1d464","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-970804_kube-system_76aba175443e2543433bcdb489ed7385_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/b1ecdd62a9317d5877cc08cdafadbd9f345eaec4d4f24d48786c0dfdd2e1d464/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-970804","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"76aba175443e2543433bcdb489ed7385","kubernetes.io/config.hash":"76aba175443e2543433bcdb489ed7385","kubernetes.io/config.seen":"2024-10-26T01:57:22.492649571Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b27e02
fd1f48fdae6cb40ff2e997ae22fe6f557329f0df2064c3a3a40d63dfca","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b27e02fd1f48fdae6cb40ff2e997ae22fe6f557329f0df2064c3a3a40d63dfca/userdata","rootfs":"/var/lib/containers/storage/overlay/2e0c8a37b63187b61b0a966327a5dfd72c6109491fc0377b4a3ad9da0dc629c4/merged","created":"2024-10-26T01:57:26.27448839Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"16c835f9","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"16c835f9\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri
-o.ContainerID":"b27e02fd1f48fdae6cb40ff2e997ae22fe6f557329f0df2064c3a3a40d63dfca","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-10-26T01:57:26.191318673Z","io.kubernetes.cri-o.Image":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.31.2","io.kubernetes.cri-o.ImageRef":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-970804\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"76aba175443e2543433bcdb489ed7385\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-970804_76aba175443e2543433bcdb489ed7385/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/2e0c8a37b6
3187b61b0a966327a5dfd72c6109491fc0377b4a3ad9da0dc629c4/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-970804_kube-system_76aba175443e2543433bcdb489ed7385_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/72b3704c4d49e457c172adbbbae844fa3073622227ba5f371efcd0aaf0f8934b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"72b3704c4d49e457c172adbbbae844fa3073622227ba5f371efcd0aaf0f8934b","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-970804_kube-system_76aba175443e2543433bcdb489ed7385_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/76aba175443e2543433bcdb489ed7385/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"
container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/76aba175443e2543433bcdb489ed7385/containers/kube-scheduler/72d684b2\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-970804","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"76aba175443e2543433bcdb489ed7385","kubernetes.io/config.hash":"76aba175443e2543433bcdb489ed7385","kubernetes.io/config.seen":"2024-10-26T01:57:22.492649571Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c82d403161a10956de0f70befba5f13c95b6e57ab4860fe3bdf3af7d1dfdc17c","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/c82d403161a10956de0f70befba5f13c95b6e57ab4860fe3bdf3af7d1dfdc17c/u
serdata","rootfs":"/var/lib/containers/storage/overlay/6468363d0f12f7a7439e9125d42c54cbc40d0172961b1dac107a29f5de7ab472/merged","created":"2024-10-26T01:57:34.758746257Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-10-26T01:57:22.492648258Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"75f6878c02c356168d8286fe4d911a46\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod75f6878c02c356168d8286fe4d911a46","io.kubernetes.cri-o.ContainerID":"c82d403161a10956de0f70befba5f13c95b6e57ab4860fe3bdf3af7d1dfdc17c","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-kubernetes-upgrade-970804_kube-system_75f6878c02c356168d8286fe4d911a46_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-10-26T01:57:34.67134887Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-970804","io.kubernetes.cr
i-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/c82d403161a10956de0f70befba5f13c95b6e57ab4860fe3bdf3af7d1dfdc17c/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-controller-manager-kubernetes-upgrade-970804","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-970804\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"75f6878c02c356168d8286fe4d911a46\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-970804_75f6878c02c356168d8286fe4d911a46/c82d403161a10956de0f70befba5f13c95b6e57ab4860fe3bdf3af7d1dfdc17c.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-kubernetes-upgrade-
970804\",\"uid\":\"75f6878c02c356168d8286fe4d911a46\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6468363d0f12f7a7439e9125d42c54cbc40d0172961b1dac107a29f5de7ab472/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-kubernetes-upgrade-970804_kube-system_75f6878c02c356168d8286fe4d911a46_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":204,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c82d403161a10956de0f70befba5f13c95b6e57ab4860fe3bdf3af7d1dfdc17c/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"c82d403161a10956de0f70befba5f13c95b6e57a
b4860fe3bdf3af7d1dfdc17c","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-970804_kube-system_75f6878c02c356168d8286fe4d911a46_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/c82d403161a10956de0f70befba5f13c95b6e57ab4860fe3bdf3af7d1dfdc17c/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-970804","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"75f6878c02c356168d8286fe4d911a46","kubernetes.io/config.hash":"75f6878c02c356168d8286fe4d911a46","kubernetes.io/config.seen":"2024-10-26T01:57:22.492648258Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"db248802badd01b46a83d3242371487988b26aa40d26fce7a98603bcd04e2fbe","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/db248802badd01b46a83d3242371487988b26aa40d26fce7a98603bcd04e2fbe/userdata","rootfs":"/var/l
ib/containers/storage/overlay/a79d66088b449a43a0315ca3f8e7873e25c403cae832f47c8baf9b4b621535db/merged","created":"2024-10-26T01:57:34.813305425Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-10-26T01:57:22.539101137Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"010c84f8ca6b96fa6474e922217a9c93\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.72.48:2379\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod010c84f8ca6b96fa6474e922217a9c93","io.kubernetes.cri-o.ContainerID":"db248802badd01b46a83d3242371487988b26aa40d26fce7a98603bcd04e2fbe","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-kubernetes-upgrade-970804_kube-system_010c84f8ca6b96fa6474e922217a9c93_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-10-26T01:57:34.686680478Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-970
804","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/db248802badd01b46a83d3242371487988b26aa40d26fce7a98603bcd04e2fbe/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"etcd-kubernetes-upgrade-970804","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"component\":\"etcd\",\"io.kubernetes.pod.uid\":\"010c84f8ca6b96fa6474e922217a9c93\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-970804\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-970804_010c84f8ca6b96fa6474e922217a9c93/db248802badd01b46a83d3242371487988b26aa40d26fce7a98603bcd04e2fbe.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-kubernetes-upgrade-970804\",\"uid\":\"010c84f8ca6b96fa6474e922217a9c93\",\"namespace\":\"kub
e-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a79d66088b449a43a0315ca3f8e7873e25c403cae832f47c8baf9b4b621535db/merged","io.kubernetes.cri-o.Name":"k8s_etcd-kubernetes-upgrade-970804_kube-system_010c84f8ca6b96fa6474e922217a9c93_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/db248802badd01b46a83d3242371487988b26aa40d26fce7a98603bcd04e2fbe/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"db248802badd01b46a83d3242371487988b26aa40d26fce7a98603bcd04e2fbe","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-970
804_kube-system_010c84f8ca6b96fa6474e922217a9c93_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/db248802badd01b46a83d3242371487988b26aa40d26fce7a98603bcd04e2fbe/userdata/shm","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-970804","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"010c84f8ca6b96fa6474e922217a9c93","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.72.48:2379","kubernetes.io/config.hash":"010c84f8ca6b96fa6474e922217a9c93","kubernetes.io/config.seen":"2024-10-26T01:57:22.539101137Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e3fca36b4dcf7dbb9b510ec100c934f41119fbe8292e5fb8acedd1dd4de16f3e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/e3fca36b4dcf7dbb9b510ec100c934f41119fbe8292e5fb8acedd1dd4de16f3e/userdata","rootfs":"/var/lib/containers/storage/overlay/d87b4cb80111a5950685
1cb38ef6cddcde3c44b47825f62d427d0a36f937eb94/merged","created":"2024-10-26T01:57:34.734510696Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-10-26T01:57:22.492643353Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"a0f307d03f5ab1b21c66a93d0c1d2592\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.72.48:8443\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/poda0f307d03f5ab1b21c66a93d0c1d2592","io.kubernetes.cri-o.ContainerID":"e3fca36b4dcf7dbb9b510ec100c934f41119fbe8292e5fb8acedd1dd4de16f3e","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-kubernetes-upgrade-970804_kube-system_a0f307d03f5ab1b21c66a93d0c1d2592_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-10-26T01:57:34.665377831Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-970804","io.kubernetes.cri
-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/e3fca36b4dcf7dbb9b510ec100c934f41119fbe8292e5fb8acedd1dd4de16f3e/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-apiserver-kubernetes-upgrade-970804","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"a0f307d03f5ab1b21c66a93d0c1d2592\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-970804\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-970804_a0f307d03f5ab1b21c66a93d0c1d2592/e3fca36b4dcf7dbb9b510ec100c934f41119fbe8292e5fb8acedd1dd4de16f3e.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-kubernetes-upgrade-970804\",\"uid\":\"a0f307d03f5ab1b21c66a93d0c1
d2592\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d87b4cb80111a59506851cb38ef6cddcde3c44b47825f62d427d0a36f937eb94/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-kubernetes-upgrade-970804_kube-system_a0f307d03f5ab1b21c66a93d0c1d2592_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":256,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e3fca36b4dcf7dbb9b510ec100c934f41119fbe8292e5fb8acedd1dd4de16f3e/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e3fca36b4dcf7dbb9b510ec100c934f41119fbe8292e5fb8acedd1dd4de16f3e","io.kubernetes.cri-o.SandboxN
ame":"k8s_kube-apiserver-kubernetes-upgrade-970804_kube-system_a0f307d03f5ab1b21c66a93d0c1d2592_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/e3fca36b4dcf7dbb9b510ec100c934f41119fbe8292e5fb8acedd1dd4de16f3e/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-970804","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"a0f307d03f5ab1b21c66a93d0c1d2592","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.72.48:8443","kubernetes.io/config.hash":"a0f307d03f5ab1b21c66a93d0c1d2592","kubernetes.io/config.seen":"2024-10-26T01:57:22.492643353Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"}]
	I1026 01:59:07.273992   61346 cri.go:126] list returned 15 containers
	I1026 01:59:07.274010   61346 cri.go:129] container: {ID:0686c24be11d5521376b9efd965302908ca70d739738c3d6eeecb78ef6c844f4 Status:stopped}
	I1026 01:59:07.274029   61346 cri.go:131] skipping 0686c24be11d5521376b9efd965302908ca70d739738c3d6eeecb78ef6c844f4 - not in ps
	I1026 01:59:07.274036   61346 cri.go:129] container: {ID:169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14 Status:stopped}
	I1026 01:59:07.274047   61346 cri.go:135] skipping {169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14 stopped}: state = "stopped", want "paused"
	I1026 01:59:07.274060   61346 cri.go:129] container: {ID:45e904cbce05f9b2ba918b078bca2d856c9c8f6d4ec3a3d39e09a09402a8c93d Status:stopped}
	I1026 01:59:07.274067   61346 cri.go:135] skipping {45e904cbce05f9b2ba918b078bca2d856c9c8f6d4ec3a3d39e09a09402a8c93d stopped}: state = "stopped", want "paused"
	I1026 01:59:07.274077   61346 cri.go:129] container: {ID:504e0d0eac535e3b9667656d378f285734645c6c9b1fa2be1a803983785ed01a Status:stopped}
	I1026 01:59:07.274087   61346 cri.go:131] skipping 504e0d0eac535e3b9667656d378f285734645c6c9b1fa2be1a803983785ed01a - not in ps
	I1026 01:59:07.274094   61346 cri.go:129] container: {ID:67f09933420ad137465f9da1353f3c4956b885339d3cb2030fd971288baa57d7 Status:stopped}
	I1026 01:59:07.274102   61346 cri.go:135] skipping {67f09933420ad137465f9da1353f3c4956b885339d3cb2030fd971288baa57d7 stopped}: state = "stopped", want "paused"
	I1026 01:59:07.274111   61346 cri.go:129] container: {ID:72b3704c4d49e457c172adbbbae844fa3073622227ba5f371efcd0aaf0f8934b Status:stopped}
	I1026 01:59:07.274120   61346 cri.go:131] skipping 72b3704c4d49e457c172adbbbae844fa3073622227ba5f371efcd0aaf0f8934b - not in ps
	I1026 01:59:07.274127   61346 cri.go:129] container: {ID:81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e Status:stopped}
	I1026 01:59:07.274138   61346 cri.go:135] skipping {81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e stopped}: state = "stopped", want "paused"
	I1026 01:59:07.274148   61346 cri.go:129] container: {ID:83d7179b935c7d73a452a22732460b980e09c3cb10d30830fa947debcd89ad3c Status:stopped}
	I1026 01:59:07.274155   61346 cri.go:135] skipping {83d7179b935c7d73a452a22732460b980e09c3cb10d30830fa947debcd89ad3c stopped}: state = "stopped", want "paused"
	I1026 01:59:07.274162   61346 cri.go:129] container: {ID:9d2a8ffcf0af504f218597b8b2d7296fcb0f2d96fa39d08b9fe3ad93cbc8136c Status:stopped}
	I1026 01:59:07.274174   61346 cri.go:131] skipping 9d2a8ffcf0af504f218597b8b2d7296fcb0f2d96fa39d08b9fe3ad93cbc8136c - not in ps
	I1026 01:59:07.274183   61346 cri.go:129] container: {ID:9ef3ccc3887e75fae4bad0625e44439a13cd4a620738ab6ea78ff7e5a6e547d3 Status:stopped}
	I1026 01:59:07.274198   61346 cri.go:135] skipping {9ef3ccc3887e75fae4bad0625e44439a13cd4a620738ab6ea78ff7e5a6e547d3 stopped}: state = "stopped", want "paused"
	I1026 01:59:07.274206   61346 cri.go:129] container: {ID:b1ecdd62a9317d5877cc08cdafadbd9f345eaec4d4f24d48786c0dfdd2e1d464 Status:stopped}
	I1026 01:59:07.274215   61346 cri.go:131] skipping b1ecdd62a9317d5877cc08cdafadbd9f345eaec4d4f24d48786c0dfdd2e1d464 - not in ps
	I1026 01:59:07.274225   61346 cri.go:129] container: {ID:b27e02fd1f48fdae6cb40ff2e997ae22fe6f557329f0df2064c3a3a40d63dfca Status:stopped}
	I1026 01:59:07.274234   61346 cri.go:135] skipping {b27e02fd1f48fdae6cb40ff2e997ae22fe6f557329f0df2064c3a3a40d63dfca stopped}: state = "stopped", want "paused"
	I1026 01:59:07.274243   61346 cri.go:129] container: {ID:c82d403161a10956de0f70befba5f13c95b6e57ab4860fe3bdf3af7d1dfdc17c Status:stopped}
	I1026 01:59:07.274255   61346 cri.go:131] skipping c82d403161a10956de0f70befba5f13c95b6e57ab4860fe3bdf3af7d1dfdc17c - not in ps
	I1026 01:59:07.274264   61346 cri.go:129] container: {ID:db248802badd01b46a83d3242371487988b26aa40d26fce7a98603bcd04e2fbe Status:stopped}
	I1026 01:59:07.274273   61346 cri.go:131] skipping db248802badd01b46a83d3242371487988b26aa40d26fce7a98603bcd04e2fbe - not in ps
	I1026 01:59:07.274280   61346 cri.go:129] container: {ID:e3fca36b4dcf7dbb9b510ec100c934f41119fbe8292e5fb8acedd1dd4de16f3e Status:stopped}
	I1026 01:59:07.274289   61346 cri.go:131] skipping e3fca36b4dcf7dbb9b510ec100c934f41119fbe8292e5fb8acedd1dd4de16f3e - not in ps
	I1026 01:59:07.274336   61346 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 01:59:07.283819   61346 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1026 01:59:07.283837   61346 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1026 01:59:07.283877   61346 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 01:59:07.294313   61346 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 01:59:07.295559   61346 kubeconfig.go:125] found "kubernetes-upgrade-970804" server: "https://192.168.72.48:8443"
	I1026 01:59:07.297667   61346 kapi.go:59] client config for kubernetes-upgrade-970804: &rest.Config{Host:"https://192.168.72.48:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/client.crt", KeyFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kubernetes-upgrade-970804/client.key", CAFile:"/home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 01:59:07.298414   61346 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 01:59:07.308602   61346 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.48
	I1026 01:59:07.308625   61346 kubeadm.go:1160] stopping kube-system containers ...
	I1026 01:59:07.308635   61346 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1026 01:59:07.308683   61346 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 01:59:07.348307   61346 cri.go:89] found id: "9ef3ccc3887e75fae4bad0625e44439a13cd4a620738ab6ea78ff7e5a6e547d3"
	I1026 01:59:07.348329   61346 cri.go:89] found id: "67f09933420ad137465f9da1353f3c4956b885339d3cb2030fd971288baa57d7"
	I1026 01:59:07.348333   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 01:59:07.348337   61346 cri.go:89] found id: "83d7179b935c7d73a452a22732460b980e09c3cb10d30830fa947debcd89ad3c"
	I1026 01:59:07.348339   61346 cri.go:89] found id: "45e904cbce05f9b2ba918b078bca2d856c9c8f6d4ec3a3d39e09a09402a8c93d"
	I1026 01:59:07.348342   61346 cri.go:89] found id: "b27e02fd1f48fdae6cb40ff2e997ae22fe6f557329f0df2064c3a3a40d63dfca"
	I1026 01:59:07.348345   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 01:59:07.348347   61346 cri.go:89] found id: ""
	I1026 01:59:07.348354   61346 cri.go:252] Stopping containers: [9ef3ccc3887e75fae4bad0625e44439a13cd4a620738ab6ea78ff7e5a6e547d3 67f09933420ad137465f9da1353f3c4956b885339d3cb2030fd971288baa57d7 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14 83d7179b935c7d73a452a22732460b980e09c3cb10d30830fa947debcd89ad3c 45e904cbce05f9b2ba918b078bca2d856c9c8f6d4ec3a3d39e09a09402a8c93d b27e02fd1f48fdae6cb40ff2e997ae22fe6f557329f0df2064c3a3a40d63dfca 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 01:59:07.348410   61346 ssh_runner.go:195] Run: which crictl
	I1026 01:59:07.352282   61346 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 9ef3ccc3887e75fae4bad0625e44439a13cd4a620738ab6ea78ff7e5a6e547d3 67f09933420ad137465f9da1353f3c4956b885339d3cb2030fd971288baa57d7 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14 83d7179b935c7d73a452a22732460b980e09c3cb10d30830fa947debcd89ad3c 45e904cbce05f9b2ba918b078bca2d856c9c8f6d4ec3a3d39e09a09402a8c93d b27e02fd1f48fdae6cb40ff2e997ae22fe6f557329f0df2064c3a3a40d63dfca 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e
	I1026 01:59:07.426475   61346 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1026 01:59:07.470329   61346 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 01:59:07.479884   61346 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Oct 26 01:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Oct 26 01:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5759 Oct 26 01:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct 26 01:57 /etc/kubernetes/scheduler.conf
	
	I1026 01:59:07.479943   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 01:59:07.488099   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 01:59:07.496666   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 01:59:07.504869   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1026 01:59:07.504926   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 01:59:07.513375   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 01:59:07.521379   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1026 01:59:07.521452   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 01:59:07.529757   61346 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 01:59:07.538071   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 01:59:07.592189   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 01:59:08.956107   61346 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.363880851s)
	I1026 01:59:08.956145   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1026 01:59:09.143709   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 01:59:09.208660   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1026 01:59:09.293742   61346 api_server.go:52] waiting for apiserver process to appear ...
	I1026 01:59:09.293826   61346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:59:09.794636   61346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:59:10.294203   61346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:59:10.306659   61346 api_server.go:72] duration metric: took 1.012916299s to wait for apiserver process to appear ...
	I1026 01:59:10.306688   61346 api_server.go:88] waiting for apiserver healthz status ...
	I1026 01:59:10.306712   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:15.307627   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1026 01:59:15.307670   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:20.308897   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1026 01:59:20.308947   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:25.309441   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1026 01:59:25.309485   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:30.309806   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1026 01:59:30.309841   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:30.966005   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": read tcp 192.168.72.1:33074->192.168.72.48:8443: read: connection reset by peer
	I1026 01:59:30.966074   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:30.966576   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:31.306965   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:31.307533   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:31.807084   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:31.807675   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:32.307258   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:32.307864   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:32.807462   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:32.808019   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:33.307091   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:33.307651   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:33.806997   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:33.807585   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:34.307152   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:34.307727   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:34.807343   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:34.808089   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:35.307249   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:35.307877   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:35.807488   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:35.808073   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:36.307720   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:36.308197   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:36.806763   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:36.807433   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:37.307285   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:37.307901   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:37.807496   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:37.808071   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:38.307104   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:38.307642   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:38.807156   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:38.807743   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:39.307255   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:39.307795   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:39.807436   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:39.808032   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:40.307645   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:40.308205   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:40.806756   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:40.807438   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:41.306984   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:41.307552   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:41.807114   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:41.807685   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:42.307253   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:42.307833   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:42.807462   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:42.808061   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:43.307375   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:43.307932   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:43.807557   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:43.808150   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:44.307741   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:44.308303   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:44.806917   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:44.807497   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:45.307033   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:45.307585   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:45.807140   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:45.807734   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:46.307240   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:46.307795   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:46.807414   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:46.807959   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:47.307700   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:47.308273   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:47.806838   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:47.807432   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:48.307701   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:48.308273   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:48.806800   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:48.807418   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:49.306930   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:49.307538   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:49.807103   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:49.807687   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:50.307260   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:50.307834   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:50.807555   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:50.808185   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:51.307847   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:51.308485   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:51.807050   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:51.807652   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:52.307259   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:52.307870   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:52.807507   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:52.808109   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:53.307422   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:53.308074   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:53.807680   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:53.808303   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:54.306846   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:54.307463   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:54.807048   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:54.807660   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:55.307201   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 01:59:55.307837   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 01:59:55.807461   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:00:00.808052   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1026 02:00:00.808120   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:00:05.808890   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1026 02:00:05.808942   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:00:10.809913   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1026 02:00:10.809975   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:00:10.810028   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:00:10.851306   61346 cri.go:89] found id: "868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:10.851328   61346 cri.go:89] found id: "ed3fb8eb7909cf9227e4931a1a0166e3ec8986a742f4018ec6c9c71d00433376"
	I1026 02:00:10.851333   61346 cri.go:89] found id: ""
	I1026 02:00:10.851342   61346 logs.go:282] 2 containers: [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343 ed3fb8eb7909cf9227e4931a1a0166e3ec8986a742f4018ec6c9c71d00433376]
	I1026 02:00:10.851404   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:10.855270   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:10.859051   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:00:10.859109   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:00:10.897183   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:10.897215   61346 cri.go:89] found id: ""
	I1026 02:00:10.897226   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:00:10.897286   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:10.901179   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:00:10.901247   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:00:10.935330   61346 cri.go:89] found id: ""
	I1026 02:00:10.935359   61346 logs.go:282] 0 containers: []
	W1026 02:00:10.935374   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:00:10.935384   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:00:10.935448   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:00:10.976837   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:10.976865   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:10.976870   61346 cri.go:89] found id: ""
	I1026 02:00:10.976880   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:00:10.976937   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:10.980881   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:10.984906   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:00:10.984968   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:00:11.031075   61346 cri.go:89] found id: ""
	I1026 02:00:11.031113   61346 logs.go:282] 0 containers: []
	W1026 02:00:11.031124   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:00:11.031136   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:00:11.031211   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:00:11.070754   61346 cri.go:89] found id: "5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:11.070784   61346 cri.go:89] found id: "e985c8299afd90169beeba2e28868d98ef34db1b7f2a630e7a8f38f340fc150a"
	I1026 02:00:11.070791   61346 cri.go:89] found id: ""
	I1026 02:00:11.070799   61346 logs.go:282] 2 containers: [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3 e985c8299afd90169beeba2e28868d98ef34db1b7f2a630e7a8f38f340fc150a]
	I1026 02:00:11.070850   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:11.074786   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:11.078864   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:00:11.078932   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:00:11.114157   61346 cri.go:89] found id: ""
	I1026 02:00:11.114190   61346 logs.go:282] 0 containers: []
	W1026 02:00:11.114201   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:00:11.114209   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:00:11.114270   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:00:11.147155   61346 cri.go:89] found id: ""
	I1026 02:00:11.147185   61346 logs.go:282] 0 containers: []
	W1026 02:00:11.147193   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:00:11.147202   61346 logs.go:123] Gathering logs for kube-apiserver [ed3fb8eb7909cf9227e4931a1a0166e3ec8986a742f4018ec6c9c71d00433376] ...
	I1026 02:00:11.147215   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed3fb8eb7909cf9227e4931a1a0166e3ec8986a742f4018ec6c9c71d00433376"
	I1026 02:00:11.182026   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:00:11.182056   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:11.221641   61346 logs.go:123] Gathering logs for kube-controller-manager [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3] ...
	I1026 02:00:11.221678   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:11.257844   61346 logs.go:123] Gathering logs for kube-controller-manager [e985c8299afd90169beeba2e28868d98ef34db1b7f2a630e7a8f38f340fc150a] ...
	I1026 02:00:11.257872   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e985c8299afd90169beeba2e28868d98ef34db1b7f2a630e7a8f38f340fc150a"
	I1026 02:00:11.291035   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:00:11.291064   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:00:11.389678   61346 logs.go:123] Gathering logs for kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343] ...
	I1026 02:00:11.389714   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:11.425113   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:00:11.425142   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:11.479482   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:00:11.479521   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:11.515270   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:00:11.515300   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:00:11.783757   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:00:11.783797   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:00:11.821824   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:00:11.821854   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:00:11.835068   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:00:11.835103   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 02:00:16.698449   61346 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (4.8633252s)
	W1026 02:00:16.698513   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp 127.0.0.1:8443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47224->127.0.0.1:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp 127.0.0.1:8443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47224->127.0.0.1:8443: read: connection reset by peer
	
	** /stderr **
	I1026 02:00:19.199471   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:00:19.200157   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:00:19.200203   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:00:19.200248   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:00:19.234561   61346 cri.go:89] found id: "868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:19.234583   61346 cri.go:89] found id: ""
	I1026 02:00:19.234590   61346 logs.go:282] 1 containers: [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343]
	I1026 02:00:19.234632   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:19.238224   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:00:19.238292   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:00:19.278188   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:19.278208   61346 cri.go:89] found id: ""
	I1026 02:00:19.278215   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:00:19.278262   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:19.282394   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:00:19.282472   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:00:19.315036   61346 cri.go:89] found id: ""
	I1026 02:00:19.315069   61346 logs.go:282] 0 containers: []
	W1026 02:00:19.315079   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:00:19.315087   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:00:19.315143   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:00:19.353216   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:19.353238   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:19.353242   61346 cri.go:89] found id: ""
	I1026 02:00:19.353248   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:00:19.353294   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:19.357359   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:19.360852   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:00:19.360915   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:00:19.393888   61346 cri.go:89] found id: ""
	I1026 02:00:19.393918   61346 logs.go:282] 0 containers: []
	W1026 02:00:19.393927   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:00:19.393933   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:00:19.394006   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:00:19.430931   61346 cri.go:89] found id: "5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:19.430953   61346 cri.go:89] found id: ""
	I1026 02:00:19.430963   61346 logs.go:282] 1 containers: [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3]
	I1026 02:00:19.431013   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:19.434848   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:00:19.434912   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:00:19.471155   61346 cri.go:89] found id: ""
	I1026 02:00:19.471180   61346 logs.go:282] 0 containers: []
	W1026 02:00:19.471187   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:00:19.471192   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:00:19.471242   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:00:19.504175   61346 cri.go:89] found id: ""
	I1026 02:00:19.504200   61346 logs.go:282] 0 containers: []
	W1026 02:00:19.504206   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:00:19.504217   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:00:19.504227   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:00:19.609103   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:00:19.609138   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:00:19.624379   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:00:19.624407   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:00:19.699296   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:00:19.699327   61346 logs.go:123] Gathering logs for kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343] ...
	I1026 02:00:19.699342   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:19.739929   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:00:19.739957   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:19.776665   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:00:19.776708   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:00:20.018658   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:00:20.018702   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:00:20.065041   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:00:20.065074   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:20.105996   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:00:20.106033   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:20.165507   61346 logs.go:123] Gathering logs for kube-controller-manager [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3] ...
	I1026 02:00:20.165545   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:22.703225   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:00:22.703820   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:00:22.703867   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:00:22.703920   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:00:22.737294   61346 cri.go:89] found id: "868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:22.737317   61346 cri.go:89] found id: ""
	I1026 02:00:22.737327   61346 logs.go:282] 1 containers: [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343]
	I1026 02:00:22.737392   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:22.740939   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:00:22.741007   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:00:22.776617   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:22.776646   61346 cri.go:89] found id: ""
	I1026 02:00:22.776656   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:00:22.776721   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:22.780447   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:00:22.780524   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:00:22.813340   61346 cri.go:89] found id: ""
	I1026 02:00:22.813370   61346 logs.go:282] 0 containers: []
	W1026 02:00:22.813380   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:00:22.813388   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:00:22.813462   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:00:22.845811   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:22.845834   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:22.845838   61346 cri.go:89] found id: ""
	I1026 02:00:22.845847   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:00:22.845901   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:22.849848   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:22.853671   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:00:22.853730   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:00:22.886048   61346 cri.go:89] found id: ""
	I1026 02:00:22.886074   61346 logs.go:282] 0 containers: []
	W1026 02:00:22.886081   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:00:22.886087   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:00:22.886137   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:00:22.918580   61346 cri.go:89] found id: "5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:22.918600   61346 cri.go:89] found id: ""
	I1026 02:00:22.918607   61346 logs.go:282] 1 containers: [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3]
	I1026 02:00:22.918656   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:22.922148   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:00:22.922206   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:00:22.954787   61346 cri.go:89] found id: ""
	I1026 02:00:22.954811   61346 logs.go:282] 0 containers: []
	W1026 02:00:22.954819   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:00:22.954867   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:00:22.954921   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:00:22.988377   61346 cri.go:89] found id: ""
	I1026 02:00:22.988400   61346 logs.go:282] 0 containers: []
	W1026 02:00:22.988408   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:00:22.988423   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:00:22.988437   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:00:23.084457   61346 logs.go:123] Gathering logs for kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343] ...
	I1026 02:00:23.084495   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:23.120799   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:00:23.120826   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:23.167491   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:00:23.167522   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:23.200726   61346 logs.go:123] Gathering logs for kube-controller-manager [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3] ...
	I1026 02:00:23.200756   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:23.233634   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:00:23.233659   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:00:23.246699   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:00:23.246726   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:00:23.309519   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:00:23.309544   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:00:23.309560   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:23.378412   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:00:23.378451   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:00:23.604670   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:00:23.604706   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:00:26.148511   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:00:26.149216   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:00:26.149271   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:00:26.149316   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:00:26.183667   61346 cri.go:89] found id: "868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:26.183686   61346 cri.go:89] found id: ""
	I1026 02:00:26.183706   61346 logs.go:282] 1 containers: [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343]
	I1026 02:00:26.183751   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:26.188548   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:00:26.188617   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:00:26.220889   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:26.220915   61346 cri.go:89] found id: ""
	I1026 02:00:26.220924   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:00:26.220978   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:26.224645   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:00:26.224697   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:00:26.257464   61346 cri.go:89] found id: ""
	I1026 02:00:26.257487   61346 logs.go:282] 0 containers: []
	W1026 02:00:26.257494   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:00:26.257499   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:00:26.257545   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:00:26.290366   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:26.290395   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:26.290401   61346 cri.go:89] found id: ""
	I1026 02:00:26.290410   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:00:26.290471   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:26.294135   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:26.297591   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:00:26.297653   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:00:26.330989   61346 cri.go:89] found id: ""
	I1026 02:00:26.331023   61346 logs.go:282] 0 containers: []
	W1026 02:00:26.331034   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:00:26.331041   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:00:26.331098   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:00:26.363889   61346 cri.go:89] found id: "5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:26.363914   61346 cri.go:89] found id: ""
	I1026 02:00:26.363924   61346 logs.go:282] 1 containers: [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3]
	I1026 02:00:26.363975   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:26.367918   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:00:26.367982   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:00:26.399730   61346 cri.go:89] found id: ""
	I1026 02:00:26.399761   61346 logs.go:282] 0 containers: []
	W1026 02:00:26.399771   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:00:26.399778   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:00:26.399870   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:00:26.432674   61346 cri.go:89] found id: ""
	I1026 02:00:26.432702   61346 logs.go:282] 0 containers: []
	W1026 02:00:26.432711   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:00:26.432732   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:00:26.432745   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:00:26.536936   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:00:26.536977   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:00:26.609310   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:00:26.609344   61346 logs.go:123] Gathering logs for kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343] ...
	I1026 02:00:26.609361   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:26.646268   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:00:26.646295   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:26.713625   61346 logs.go:123] Gathering logs for kube-controller-manager [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3] ...
	I1026 02:00:26.713655   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:26.746787   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:00:26.746814   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:00:26.965228   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:00:26.965272   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:00:27.003329   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:00:27.003366   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:00:27.016994   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:00:27.017024   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:27.058616   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:00:27.058653   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:29.592065   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:00:29.592764   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:00:29.592810   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:00:29.592863   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:00:29.627155   61346 cri.go:89] found id: "868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:29.627175   61346 cri.go:89] found id: ""
	I1026 02:00:29.627183   61346 logs.go:282] 1 containers: [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343]
	I1026 02:00:29.627232   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:29.630828   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:00:29.630901   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:00:29.668819   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:29.668846   61346 cri.go:89] found id: ""
	I1026 02:00:29.668859   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:00:29.668904   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:29.672645   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:00:29.672701   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:00:29.709697   61346 cri.go:89] found id: ""
	I1026 02:00:29.709724   61346 logs.go:282] 0 containers: []
	W1026 02:00:29.709734   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:00:29.709740   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:00:29.709799   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:00:29.748209   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:29.748232   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:29.748237   61346 cri.go:89] found id: ""
	I1026 02:00:29.748245   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:00:29.748313   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:29.752214   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:29.756047   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:00:29.756110   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:00:29.793294   61346 cri.go:89] found id: ""
	I1026 02:00:29.793323   61346 logs.go:282] 0 containers: []
	W1026 02:00:29.793332   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:00:29.793337   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:00:29.793400   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:00:29.829381   61346 cri.go:89] found id: "5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:29.829413   61346 cri.go:89] found id: ""
	I1026 02:00:29.829432   61346 logs.go:282] 1 containers: [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3]
	I1026 02:00:29.829483   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:29.833300   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:00:29.833357   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:00:29.866776   61346 cri.go:89] found id: ""
	I1026 02:00:29.866805   61346 logs.go:282] 0 containers: []
	W1026 02:00:29.866815   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:00:29.866823   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:00:29.866899   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:00:29.899787   61346 cri.go:89] found id: ""
	I1026 02:00:29.899826   61346 logs.go:282] 0 containers: []
	W1026 02:00:29.899835   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:00:29.899848   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:00:29.899861   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:00:29.912885   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:00:29.912914   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:00:29.978294   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:00:29.978315   61346 logs.go:123] Gathering logs for kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343] ...
	I1026 02:00:29.978325   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:30.014420   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:00:30.014448   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:30.052292   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:00:30.052324   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:00:30.094691   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:00:30.094718   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:00:30.198943   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:00:30.198986   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:30.268368   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:00:30.268407   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:30.304410   61346 logs.go:123] Gathering logs for kube-controller-manager [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3] ...
	I1026 02:00:30.304448   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:30.337488   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:00:30.337516   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:00:33.066461   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:00:33.067083   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:00:33.067139   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:00:33.067194   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:00:33.101208   61346 cri.go:89] found id: "868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:33.101229   61346 cri.go:89] found id: ""
	I1026 02:00:33.101236   61346 logs.go:282] 1 containers: [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343]
	I1026 02:00:33.101282   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:33.105049   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:00:33.105107   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:00:33.138304   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:33.138324   61346 cri.go:89] found id: ""
	I1026 02:00:33.138331   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:00:33.138376   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:33.142189   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:00:33.142250   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:00:33.175041   61346 cri.go:89] found id: ""
	I1026 02:00:33.175064   61346 logs.go:282] 0 containers: []
	W1026 02:00:33.175073   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:00:33.175079   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:00:33.175127   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:00:33.208034   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:33.208053   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:33.208057   61346 cri.go:89] found id: ""
	I1026 02:00:33.208064   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:00:33.208110   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:33.211966   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:33.215370   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:00:33.215428   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:00:33.249597   61346 cri.go:89] found id: ""
	I1026 02:00:33.249621   61346 logs.go:282] 0 containers: []
	W1026 02:00:33.249628   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:00:33.249634   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:00:33.249681   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:00:33.290529   61346 cri.go:89] found id: "5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:33.290551   61346 cri.go:89] found id: ""
	I1026 02:00:33.290559   61346 logs.go:282] 1 containers: [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3]
	I1026 02:00:33.290603   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:33.294555   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:00:33.294634   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:00:33.327321   61346 cri.go:89] found id: ""
	I1026 02:00:33.327347   61346 logs.go:282] 0 containers: []
	W1026 02:00:33.327357   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:00:33.327364   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:00:33.327435   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:00:33.363146   61346 cri.go:89] found id: ""
	I1026 02:00:33.363172   61346 logs.go:282] 0 containers: []
	W1026 02:00:33.363183   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:00:33.363202   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:00:33.363221   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:00:33.377684   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:00:33.377716   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:33.465840   61346 logs.go:123] Gathering logs for kube-controller-manager [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3] ...
	I1026 02:00:33.465889   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:33.507524   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:00:33.507546   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:00:33.746087   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:00:33.746121   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:00:33.844100   61346 logs.go:123] Gathering logs for kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343] ...
	I1026 02:00:33.844134   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:33.886737   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:00:33.886766   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:33.931791   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:00:33.931835   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:33.973848   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:00:33.973877   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:00:34.023150   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:00:34.023176   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:00:34.093777   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:00:36.594036   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:00:36.594652   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:00:36.594705   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:00:36.594763   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:00:36.631467   61346 cri.go:89] found id: "868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:36.631492   61346 cri.go:89] found id: ""
	I1026 02:00:36.631500   61346 logs.go:282] 1 containers: [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343]
	I1026 02:00:36.631550   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:36.635157   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:00:36.635210   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:00:36.667139   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:36.667169   61346 cri.go:89] found id: ""
	I1026 02:00:36.667180   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:00:36.667240   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:36.670723   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:00:36.670787   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:00:36.707007   61346 cri.go:89] found id: ""
	I1026 02:00:36.707034   61346 logs.go:282] 0 containers: []
	W1026 02:00:36.707042   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:00:36.707047   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:00:36.707095   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:00:36.743860   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:36.743885   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:36.743889   61346 cri.go:89] found id: ""
	I1026 02:00:36.743897   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:00:36.743947   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:36.747776   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:36.751403   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:00:36.751446   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:00:36.789208   61346 cri.go:89] found id: ""
	I1026 02:00:36.789234   61346 logs.go:282] 0 containers: []
	W1026 02:00:36.789242   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:00:36.789247   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:00:36.789291   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:00:36.825556   61346 cri.go:89] found id: "5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:36.825578   61346 cri.go:89] found id: ""
	I1026 02:00:36.825587   61346 logs.go:282] 1 containers: [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3]
	I1026 02:00:36.825638   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:36.829451   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:00:36.829510   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:00:36.868184   61346 cri.go:89] found id: ""
	I1026 02:00:36.868209   61346 logs.go:282] 0 containers: []
	W1026 02:00:36.868216   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:00:36.868222   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:00:36.868276   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:00:36.905919   61346 cri.go:89] found id: ""
	I1026 02:00:36.905949   61346 logs.go:282] 0 containers: []
	W1026 02:00:36.905957   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:00:36.905970   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:00:36.905984   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:36.955515   61346 logs.go:123] Gathering logs for kube-controller-manager [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3] ...
	I1026 02:00:36.955541   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:36.992901   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:00:36.992932   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:00:37.005966   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:00:37.005997   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:00:37.067041   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:00:37.067066   61346 logs.go:123] Gathering logs for kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343] ...
	I1026 02:00:37.067079   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:37.103376   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:00:37.103407   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:00:37.331924   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:00:37.331961   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:00:37.372966   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:00:37.372996   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:00:37.473529   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:00:37.473573   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:37.533525   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:00:37.533563   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:40.066574   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:00:40.067217   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:00:40.067261   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:00:40.067308   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:00:40.101017   61346 cri.go:89] found id: "868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:40.101041   61346 cri.go:89] found id: ""
	I1026 02:00:40.101048   61346 logs.go:282] 1 containers: [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343]
	I1026 02:00:40.101092   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:40.104707   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:00:40.104759   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:00:40.142358   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:40.142378   61346 cri.go:89] found id: ""
	I1026 02:00:40.142385   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:00:40.142431   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:40.146203   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:00:40.146252   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:00:40.177579   61346 cri.go:89] found id: ""
	I1026 02:00:40.177609   61346 logs.go:282] 0 containers: []
	W1026 02:00:40.177621   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:00:40.177628   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:00:40.177684   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:00:40.211421   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:40.211443   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:40.211447   61346 cri.go:89] found id: ""
	I1026 02:00:40.211455   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:00:40.211515   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:40.215568   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:40.219045   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:00:40.219099   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:00:40.256172   61346 cri.go:89] found id: ""
	I1026 02:00:40.256204   61346 logs.go:282] 0 containers: []
	W1026 02:00:40.256214   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:00:40.256222   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:00:40.256284   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:00:40.293701   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:00:40.293727   61346 cri.go:89] found id: "5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:40.293733   61346 cri.go:89] found id: ""
	I1026 02:00:40.293742   61346 logs.go:282] 2 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3]
	I1026 02:00:40.293796   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:40.297882   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:40.301368   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:00:40.301438   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:00:40.332642   61346 cri.go:89] found id: ""
	I1026 02:00:40.332670   61346 logs.go:282] 0 containers: []
	W1026 02:00:40.332678   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:00:40.332683   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:00:40.332732   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:00:40.362166   61346 cri.go:89] found id: ""
	I1026 02:00:40.362197   61346 logs.go:282] 0 containers: []
	W1026 02:00:40.362208   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:00:40.362219   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:00:40.362236   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:40.420941   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:00:40.420978   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:40.455143   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:00:40.455167   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:00:40.557488   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:00:40.557525   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:00:40.571349   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:00:40.571420   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:00:40.636014   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:00:40.636042   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:00:40.636057   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:40.674054   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:00:40.674083   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:00:40.713408   61346 logs.go:123] Gathering logs for kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343] ...
	I1026 02:00:40.713450   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:40.755851   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:00:40.755881   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:00:40.789022   61346 logs.go:123] Gathering logs for kube-controller-manager [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3] ...
	I1026 02:00:40.789054   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:40.822310   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:00:40.822337   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:00:43.553807   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:00:43.554393   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:00:43.554448   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:00:43.554490   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:00:43.592938   61346 cri.go:89] found id: "868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:43.592959   61346 cri.go:89] found id: ""
	I1026 02:00:43.592966   61346 logs.go:282] 1 containers: [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343]
	I1026 02:00:43.593024   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:43.596864   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:00:43.596927   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:00:43.629090   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:43.629114   61346 cri.go:89] found id: ""
	I1026 02:00:43.629124   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:00:43.629171   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:43.633092   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:00:43.633148   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:00:43.669478   61346 cri.go:89] found id: ""
	I1026 02:00:43.669504   61346 logs.go:282] 0 containers: []
	W1026 02:00:43.669512   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:00:43.669517   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:00:43.669572   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:00:43.710108   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:43.710129   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:43.710134   61346 cri.go:89] found id: ""
	I1026 02:00:43.710140   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:00:43.710192   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:43.714407   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:43.718062   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:00:43.718116   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:00:43.755191   61346 cri.go:89] found id: ""
	I1026 02:00:43.755217   61346 logs.go:282] 0 containers: []
	W1026 02:00:43.755225   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:00:43.755231   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:00:43.755321   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:00:43.793558   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:00:43.793584   61346 cri.go:89] found id: "5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:43.793590   61346 cri.go:89] found id: ""
	I1026 02:00:43.793597   61346 logs.go:282] 2 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3]
	I1026 02:00:43.793647   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:43.797642   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:43.801080   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:00:43.801140   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:00:43.833471   61346 cri.go:89] found id: ""
	I1026 02:00:43.833500   61346 logs.go:282] 0 containers: []
	W1026 02:00:43.833508   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:00:43.833513   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:00:43.833563   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:00:43.864528   61346 cri.go:89] found id: ""
	I1026 02:00:43.864556   61346 logs.go:282] 0 containers: []
	W1026 02:00:43.864563   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:00:43.864571   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:00:43.864583   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:00:43.962636   61346 logs.go:123] Gathering logs for kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343] ...
	I1026 02:00:43.962669   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:44.000853   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:00:44.000882   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:44.039677   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:00:44.039707   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:00:44.076095   61346 logs.go:123] Gathering logs for kube-controller-manager [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3] ...
	I1026 02:00:44.076122   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:44.108731   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:00:44.108757   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:00:44.341157   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:00:44.341192   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:00:44.355030   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:00:44.355056   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:00:44.418910   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:00:44.418934   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:00:44.418952   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:44.477304   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:00:44.477338   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:44.511654   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:00:44.511684   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:00:47.048002   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:00:52.048804   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1026 02:00:52.048868   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:00:52.048920   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:00:52.083594   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:00:52.083616   61346 cri.go:89] found id: "868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:52.083621   61346 cri.go:89] found id: ""
	I1026 02:00:52.083628   61346 logs.go:282] 2 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343]
	I1026 02:00:52.083686   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:52.087792   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:52.091654   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:00:52.091722   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:00:52.125866   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:52.125893   61346 cri.go:89] found id: ""
	I1026 02:00:52.125900   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:00:52.125944   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:52.129585   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:00:52.129652   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:00:52.164516   61346 cri.go:89] found id: ""
	I1026 02:00:52.164539   61346 logs.go:282] 0 containers: []
	W1026 02:00:52.164546   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:00:52.164552   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:00:52.164608   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:00:52.197457   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:52.197477   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:52.197481   61346 cri.go:89] found id: ""
	I1026 02:00:52.197488   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:00:52.197548   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:52.201279   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:52.204927   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:00:52.205001   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:00:52.239482   61346 cri.go:89] found id: ""
	I1026 02:00:52.239510   61346 logs.go:282] 0 containers: []
	W1026 02:00:52.239520   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:00:52.239530   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:00:52.239595   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:00:52.277202   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:00:52.277225   61346 cri.go:89] found id: "5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:52.277230   61346 cri.go:89] found id: ""
	I1026 02:00:52.277239   61346 logs.go:282] 2 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3]
	I1026 02:00:52.277299   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:52.281171   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:52.284923   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:00:52.284989   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:00:52.322882   61346 cri.go:89] found id: ""
	I1026 02:00:52.322912   61346 logs.go:282] 0 containers: []
	W1026 02:00:52.322920   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:00:52.322925   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:00:52.322983   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:00:52.355215   61346 cri.go:89] found id: ""
	I1026 02:00:52.355240   61346 logs.go:282] 0 containers: []
	W1026 02:00:52.355252   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:00:52.355260   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:00:52.355271   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:52.393632   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:00:52.393668   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:00:52.428737   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:00:52.428768   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:00:52.679756   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:00:52.679801   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:00:52.723243   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:00:52.723274   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:00:52.824393   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:00:52.824432   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 02:01:02.893223   61346 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.068760889s)
	W1026 02:01:02.893273   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1026 02:01:02.893284   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:02.893304   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:02.935384   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:02.935419   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:02.968519   61346 logs.go:123] Gathering logs for kube-controller-manager [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3] ...
	I1026 02:01:02.968546   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:01:03.001893   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:03.001929   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:03.015251   61346 logs.go:123] Gathering logs for kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343] ...
	I1026 02:01:03.015284   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:01:03.052713   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:03.052746   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:05.613617   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:07.096121   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": read tcp 192.168.72.1:52856->192.168.72.48:8443: read: connection reset by peer
	I1026 02:01:07.096182   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:07.096236   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:07.142098   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:07.142125   61346 cri.go:89] found id: "868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:01:07.142131   61346 cri.go:89] found id: ""
	I1026 02:01:07.142140   61346 logs.go:282] 2 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343]
	I1026 02:01:07.142192   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:07.146063   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:07.149342   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:07.149390   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:07.180732   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:07.180757   61346 cri.go:89] found id: ""
	I1026 02:01:07.180765   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:07.180807   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:07.184449   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:07.184499   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:07.218220   61346 cri.go:89] found id: ""
	I1026 02:01:07.218244   61346 logs.go:282] 0 containers: []
	W1026 02:01:07.218254   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:07.218262   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:07.218320   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:07.251857   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:07.251879   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:07.251884   61346 cri.go:89] found id: ""
	I1026 02:01:07.251892   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:07.251952   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:07.255585   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:07.258900   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:07.258948   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:07.290771   61346 cri.go:89] found id: ""
	I1026 02:01:07.290798   61346 logs.go:282] 0 containers: []
	W1026 02:01:07.290808   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:07.290815   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:07.290874   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:07.322625   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:07.322650   61346 cri.go:89] found id: "5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:01:07.322657   61346 cri.go:89] found id: ""
	I1026 02:01:07.322666   61346 logs.go:282] 2 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3]
	I1026 02:01:07.322734   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:07.326314   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:07.329628   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:07.329686   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:07.359973   61346 cri.go:89] found id: ""
	I1026 02:01:07.360000   61346 logs.go:282] 0 containers: []
	W1026 02:01:07.360010   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:07.360017   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:07.360072   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:07.391127   61346 cri.go:89] found id: ""
	I1026 02:01:07.391155   61346 logs.go:282] 0 containers: []
	W1026 02:01:07.391162   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:07.391170   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:07.391181   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:07.451181   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:07.451219   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:07.487589   61346 logs.go:123] Gathering logs for kube-controller-manager [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3] ...
	I1026 02:01:07.487630   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:01:07.520086   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:07.520114   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:07.822924   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:07.822962   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:07.922602   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:07.922640   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:07.991912   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:07.991945   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:07.991961   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:08.029392   61346 logs.go:123] Gathering logs for kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343] ...
	I1026 02:01:08.029431   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	W1026 02:01:08.060876   61346 logs.go:130] failed kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343": Process exited with status 1
	stdout:
	
	stderr:
	E1026 02:01:08.052546    3690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343\": container with ID starting with 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343 not found: ID does not exist" containerID="868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	time="2024-10-26T02:01:08Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343\": container with ID starting with 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1026 02:01:08.052546    3690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343\": container with ID starting with 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343 not found: ID does not exist" containerID="868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	time="2024-10-26T02:01:08Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343\": container with ID starting with 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343 not found: ID does not exist"
	
	** /stderr **
	I1026 02:01:08.060899   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:08.060914   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:08.074181   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:08.074208   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:08.124123   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:08.124152   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:08.157059   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:08.157089   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:10.694623   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:10.695245   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:10.695296   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:10.695355   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:10.731272   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:10.731293   61346 cri.go:89] found id: ""
	I1026 02:01:10.731301   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:10.731357   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:10.735380   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:10.735440   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:10.772386   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:10.772406   61346 cri.go:89] found id: ""
	I1026 02:01:10.772413   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:10.772464   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:10.776121   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:10.776174   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:10.815627   61346 cri.go:89] found id: ""
	I1026 02:01:10.815659   61346 logs.go:282] 0 containers: []
	W1026 02:01:10.815670   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:10.815677   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:10.815743   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:10.848752   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:10.848782   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:10.848788   61346 cri.go:89] found id: ""
	I1026 02:01:10.848797   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:10.848854   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:10.852529   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:10.856053   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:10.856107   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:10.888501   61346 cri.go:89] found id: ""
	I1026 02:01:10.888530   61346 logs.go:282] 0 containers: []
	W1026 02:01:10.888538   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:10.888544   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:10.888598   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:10.921137   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:10.921162   61346 cri.go:89] found id: ""
	I1026 02:01:10.921171   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:10.921218   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:10.924867   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:10.924921   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:10.957320   61346 cri.go:89] found id: ""
	I1026 02:01:10.957348   61346 logs.go:282] 0 containers: []
	W1026 02:01:10.957356   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:10.957362   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:10.957430   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:10.990595   61346 cri.go:89] found id: ""
	I1026 02:01:10.990640   61346 logs.go:282] 0 containers: []
	W1026 02:01:10.990649   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:10.990661   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:10.990673   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:11.023482   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:11.023516   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:11.126657   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:11.126696   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:11.140676   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:11.140700   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:11.207177   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:11.207201   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:11.207217   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:11.248581   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:11.248611   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:11.285849   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:11.285874   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:11.321506   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:11.321535   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:11.385340   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:11.385374   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:11.418089   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:11.418115   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:14.174396   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:14.175123   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:14.175180   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:14.175231   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:14.207390   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:14.207415   61346 cri.go:89] found id: ""
	I1026 02:01:14.207426   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:14.207485   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:14.211295   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:14.211361   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:14.243130   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:14.243152   61346 cri.go:89] found id: ""
	I1026 02:01:14.243159   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:14.243202   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:14.246874   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:14.246937   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:14.279016   61346 cri.go:89] found id: ""
	I1026 02:01:14.279042   61346 logs.go:282] 0 containers: []
	W1026 02:01:14.279050   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:14.279055   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:14.279107   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:14.310828   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:14.310854   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:14.310858   61346 cri.go:89] found id: ""
	I1026 02:01:14.310865   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:14.310909   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:14.314565   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:14.318093   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:14.318149   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:14.349167   61346 cri.go:89] found id: ""
	I1026 02:01:14.349188   61346 logs.go:282] 0 containers: []
	W1026 02:01:14.349196   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:14.349201   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:14.349249   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:14.381183   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:14.381204   61346 cri.go:89] found id: ""
	I1026 02:01:14.381211   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:14.381255   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:14.384990   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:14.385052   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:14.417426   61346 cri.go:89] found id: ""
	I1026 02:01:14.417453   61346 logs.go:282] 0 containers: []
	W1026 02:01:14.417460   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:14.417466   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:14.417522   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:14.451910   61346 cri.go:89] found id: ""
	I1026 02:01:14.451936   61346 logs.go:282] 0 containers: []
	W1026 02:01:14.451943   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:14.451957   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:14.451974   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:14.485936   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:14.485964   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:14.526045   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:14.526070   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:14.590281   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:14.590314   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:14.623568   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:14.623593   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:14.655446   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:14.655474   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:14.893767   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:14.893809   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:14.995834   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:14.995872   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:15.009129   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:15.009156   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:15.070352   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:15.070379   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:15.070396   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:17.611835   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:17.612559   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:17.612623   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:17.612674   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:17.646568   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:17.646587   61346 cri.go:89] found id: ""
	I1026 02:01:17.646595   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:17.646642   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:17.650559   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:17.650630   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:17.685397   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:17.685436   61346 cri.go:89] found id: ""
	I1026 02:01:17.685444   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:17.685490   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:17.689098   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:17.689152   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:17.720990   61346 cri.go:89] found id: ""
	I1026 02:01:17.721014   61346 logs.go:282] 0 containers: []
	W1026 02:01:17.721021   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:17.721027   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:17.721075   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:17.751951   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:17.751974   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:17.751977   61346 cri.go:89] found id: ""
	I1026 02:01:17.751984   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:17.752028   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:17.755480   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:17.758823   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:17.758887   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:17.789928   61346 cri.go:89] found id: ""
	I1026 02:01:17.789962   61346 logs.go:282] 0 containers: []
	W1026 02:01:17.789972   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:17.789979   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:17.790039   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:17.822040   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:17.822067   61346 cri.go:89] found id: ""
	I1026 02:01:17.822077   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:17.822122   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:17.825667   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:17.825737   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:17.858203   61346 cri.go:89] found id: ""
	I1026 02:01:17.858231   61346 logs.go:282] 0 containers: []
	W1026 02:01:17.858241   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:17.858248   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:17.858308   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:17.890054   61346 cri.go:89] found id: ""
	I1026 02:01:17.890086   61346 logs.go:282] 0 containers: []
	W1026 02:01:17.890095   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:17.890114   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:17.890130   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:17.952564   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:17.952614   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:18.193053   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:18.193090   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:18.206465   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:18.206493   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:18.267097   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:18.267125   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:18.267139   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:18.306389   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:18.306415   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:18.337145   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:18.337174   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:18.372122   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:18.372153   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:18.475552   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:18.475588   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:18.511441   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:18.511470   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:21.044536   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:21.045143   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:21.045196   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:21.045250   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:21.088102   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:21.088129   61346 cri.go:89] found id: ""
	I1026 02:01:21.088139   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:21.088209   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:21.091854   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:21.091924   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:21.124836   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:21.124859   61346 cri.go:89] found id: ""
	I1026 02:01:21.124867   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:21.124923   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:21.128631   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:21.128694   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:21.161231   61346 cri.go:89] found id: ""
	I1026 02:01:21.161256   61346 logs.go:282] 0 containers: []
	W1026 02:01:21.161264   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:21.161269   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:21.161317   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:21.197288   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:21.197316   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:21.197320   61346 cri.go:89] found id: ""
	I1026 02:01:21.197327   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:21.197376   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:21.201028   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:21.204408   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:21.204457   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:21.237679   61346 cri.go:89] found id: ""
	I1026 02:01:21.237706   61346 logs.go:282] 0 containers: []
	W1026 02:01:21.237717   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:21.237724   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:21.237789   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:21.269050   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:21.269074   61346 cri.go:89] found id: ""
	I1026 02:01:21.269081   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:21.269132   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:21.272724   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:21.272783   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:21.305027   61346 cri.go:89] found id: ""
	I1026 02:01:21.305052   61346 logs.go:282] 0 containers: []
	W1026 02:01:21.305063   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:21.305071   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:21.305135   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:21.340621   61346 cri.go:89] found id: ""
	I1026 02:01:21.340653   61346 logs.go:282] 0 containers: []
	W1026 02:01:21.340663   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:21.340678   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:21.340692   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:21.378423   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:21.378454   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:21.412443   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:21.412471   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:21.509369   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:21.509407   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:21.572931   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:21.572963   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:21.572982   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:21.612893   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:21.612921   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:21.832618   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:21.832676   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:21.868234   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:21.868266   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:21.880578   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:21.880603   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:21.948394   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:21.948426   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:24.481168   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:24.481766   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:24.481817   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:24.481870   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:24.516276   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:24.516301   61346 cri.go:89] found id: ""
	I1026 02:01:24.516309   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:24.516371   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:24.520160   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:24.520226   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:24.552991   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:24.553020   61346 cri.go:89] found id: ""
	I1026 02:01:24.553030   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:24.553090   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:24.556648   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:24.556707   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:24.592788   61346 cri.go:89] found id: ""
	I1026 02:01:24.592814   61346 logs.go:282] 0 containers: []
	W1026 02:01:24.592823   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:24.592828   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:24.592877   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:24.625184   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:24.625215   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:24.625221   61346 cri.go:89] found id: ""
	I1026 02:01:24.625230   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:24.625287   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:24.628925   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:24.632271   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:24.632317   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:24.662915   61346 cri.go:89] found id: ""
	I1026 02:01:24.662945   61346 logs.go:282] 0 containers: []
	W1026 02:01:24.662955   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:24.662963   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:24.663022   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:24.695636   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:24.695670   61346 cri.go:89] found id: ""
	I1026 02:01:24.695678   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:24.695736   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:24.699361   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:24.699421   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:24.735746   61346 cri.go:89] found id: ""
	I1026 02:01:24.735775   61346 logs.go:282] 0 containers: []
	W1026 02:01:24.735785   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:24.735792   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:24.735842   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:24.767245   61346 cri.go:89] found id: ""
	I1026 02:01:24.767272   61346 logs.go:282] 0 containers: []
	W1026 02:01:24.767280   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:24.767293   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:24.767305   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:24.831995   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:24.832021   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:24.832036   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:24.868647   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:24.868678   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:25.087247   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:25.087285   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:25.100575   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:25.100605   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:25.140826   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:25.140856   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:25.205409   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:25.205447   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:25.238529   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:25.238553   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:25.271413   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:25.271442   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:25.308405   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:25.308434   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:27.909036   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:27.909608   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:27.909656   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:27.909700   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:27.943040   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:27.943060   61346 cri.go:89] found id: ""
	I1026 02:01:27.943067   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:27.943124   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:27.946739   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:27.946800   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:27.978767   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:27.978797   61346 cri.go:89] found id: ""
	I1026 02:01:27.978806   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:27.978855   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:27.982503   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:27.982561   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:28.015044   61346 cri.go:89] found id: ""
	I1026 02:01:28.015072   61346 logs.go:282] 0 containers: []
	W1026 02:01:28.015083   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:28.015090   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:28.015149   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:28.046707   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:28.046730   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:28.046734   61346 cri.go:89] found id: ""
	I1026 02:01:28.046742   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:28.046792   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:28.050468   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:28.053813   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:28.053877   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:28.085803   61346 cri.go:89] found id: ""
	I1026 02:01:28.085826   61346 logs.go:282] 0 containers: []
	W1026 02:01:28.085833   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:28.085838   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:28.085902   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:28.120410   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:28.120436   61346 cri.go:89] found id: ""
	I1026 02:01:28.120444   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:28.120489   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:28.124294   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:28.124370   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:28.161259   61346 cri.go:89] found id: ""
	I1026 02:01:28.161285   61346 logs.go:282] 0 containers: []
	W1026 02:01:28.161293   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:28.161298   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:28.161350   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:28.192909   61346 cri.go:89] found id: ""
	I1026 02:01:28.192940   61346 logs.go:282] 0 containers: []
	W1026 02:01:28.192950   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:28.192967   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:28.192982   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:28.205380   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:28.205402   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:28.241602   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:28.241629   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:28.278331   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:28.278360   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:28.345248   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:28.345285   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:28.377443   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:28.377471   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:28.419594   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:28.419621   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:28.517317   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:28.517353   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:28.578149   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:28.578172   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:28.578184   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:28.613440   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:28.613468   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:31.342919   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:31.343483   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:31.343531   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:31.343577   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:31.378101   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:31.378120   61346 cri.go:89] found id: ""
	I1026 02:01:31.378127   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:31.378172   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:31.381743   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:31.381817   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:31.412310   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:31.412333   61346 cri.go:89] found id: ""
	I1026 02:01:31.412340   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:31.412388   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:31.416091   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:31.416145   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:31.446692   61346 cri.go:89] found id: ""
	I1026 02:01:31.446719   61346 logs.go:282] 0 containers: []
	W1026 02:01:31.446729   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:31.446736   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:31.446798   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:31.480115   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:31.480136   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:31.480142   61346 cri.go:89] found id: ""
	I1026 02:01:31.480150   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:31.480266   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:31.483932   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:31.487444   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:31.487511   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:31.520531   61346 cri.go:89] found id: ""
	I1026 02:01:31.520564   61346 logs.go:282] 0 containers: []
	W1026 02:01:31.520576   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:31.520583   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:31.520636   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:31.557479   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:31.557507   61346 cri.go:89] found id: ""
	I1026 02:01:31.557516   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:31.557572   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:31.561176   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:31.561239   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:31.597815   61346 cri.go:89] found id: ""
	I1026 02:01:31.597837   61346 logs.go:282] 0 containers: []
	W1026 02:01:31.597844   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:31.597850   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:31.597911   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:31.631625   61346 cri.go:89] found id: ""
	I1026 02:01:31.631652   61346 logs.go:282] 0 containers: []
	W1026 02:01:31.631661   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:31.631671   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:31.631688   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:31.666058   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:31.666084   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:31.896870   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:31.896913   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:31.938222   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:31.938254   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:31.950983   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:31.951007   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:31.991748   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:31.991780   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:32.027912   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:32.027939   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:32.091564   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:32.091599   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:32.124532   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:32.124564   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:32.222001   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:32.222041   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:32.286134   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:34.787080   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:34.787656   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:34.787709   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:34.787757   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:34.820961   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:34.820980   61346 cri.go:89] found id: ""
	I1026 02:01:34.820987   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:34.821033   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:34.824625   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:34.824684   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:34.857704   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:34.857734   61346 cri.go:89] found id: ""
	I1026 02:01:34.857745   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:34.857803   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:34.861462   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:34.861524   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:34.895007   61346 cri.go:89] found id: ""
	I1026 02:01:34.895038   61346 logs.go:282] 0 containers: []
	W1026 02:01:34.895047   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:34.895053   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:34.895101   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:34.926650   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:34.926669   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:34.926673   61346 cri.go:89] found id: ""
	I1026 02:01:34.926679   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:34.926727   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:34.930412   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:34.933891   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:34.933955   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:34.967170   61346 cri.go:89] found id: ""
	I1026 02:01:34.967199   61346 logs.go:282] 0 containers: []
	W1026 02:01:34.967207   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:34.967214   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:34.967267   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:34.999176   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:34.999197   61346 cri.go:89] found id: ""
	I1026 02:01:34.999204   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:34.999256   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:35.003081   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:35.003140   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:35.034864   61346 cri.go:89] found id: ""
	I1026 02:01:35.034895   61346 logs.go:282] 0 containers: []
	W1026 02:01:35.034904   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:35.034910   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:35.034984   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:35.066649   61346 cri.go:89] found id: ""
	I1026 02:01:35.066679   61346 logs.go:282] 0 containers: []
	W1026 02:01:35.066687   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:35.066700   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:35.066717   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:35.105709   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:35.105737   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:35.346505   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:35.346540   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:35.450362   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:35.450396   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:35.463653   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:35.463678   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:35.526627   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:35.526660   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:35.526676   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:35.558724   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:35.558756   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:35.600035   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:35.600061   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:35.635520   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:35.635546   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:35.701957   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:35.701997   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:38.236696   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:38.237245   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:38.237290   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:38.237332   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:38.274939   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:38.274967   61346 cri.go:89] found id: ""
	I1026 02:01:38.274976   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:38.275026   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:38.278658   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:38.278714   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:38.311299   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:38.311320   61346 cri.go:89] found id: ""
	I1026 02:01:38.311327   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:38.311380   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:38.315221   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:38.315278   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:38.346651   61346 cri.go:89] found id: ""
	I1026 02:01:38.346682   61346 logs.go:282] 0 containers: []
	W1026 02:01:38.346692   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:38.346699   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:38.346760   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:38.379260   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:38.379282   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:38.379286   61346 cri.go:89] found id: ""
	I1026 02:01:38.379292   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:38.379336   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:38.383048   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:38.386640   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:38.386688   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:38.418119   61346 cri.go:89] found id: ""
	I1026 02:01:38.418143   61346 logs.go:282] 0 containers: []
	W1026 02:01:38.418150   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:38.418156   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:38.418205   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:38.449593   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:38.449617   61346 cri.go:89] found id: ""
	I1026 02:01:38.449624   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:38.449675   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:38.453336   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:38.453393   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:38.485785   61346 cri.go:89] found id: ""
	I1026 02:01:38.485817   61346 logs.go:282] 0 containers: []
	W1026 02:01:38.485828   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:38.485834   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:38.485881   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:38.517273   61346 cri.go:89] found id: ""
	I1026 02:01:38.517298   61346 logs.go:282] 0 containers: []
	W1026 02:01:38.517305   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:38.517316   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:38.517327   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:38.577625   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:38.577647   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:38.577671   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:38.642831   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:38.642865   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:38.675642   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:38.675667   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:38.775725   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:38.775759   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:38.789346   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:38.789373   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:38.821294   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:38.821322   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:39.047451   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:39.047488   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:39.085242   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:39.085269   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:39.121161   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:39.121192   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:41.663167   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:41.663756   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:41.663810   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:41.663853   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:41.696060   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:41.696085   61346 cri.go:89] found id: ""
	I1026 02:01:41.696094   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:41.696156   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:41.699834   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:41.699900   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:41.736393   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:41.736418   61346 cri.go:89] found id: ""
	I1026 02:01:41.736426   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:41.736479   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:41.740126   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:41.740180   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:41.776330   61346 cri.go:89] found id: ""
	I1026 02:01:41.776355   61346 logs.go:282] 0 containers: []
	W1026 02:01:41.776362   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:41.776367   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:41.776413   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:41.825109   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:41.825130   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:41.825134   61346 cri.go:89] found id: ""
	I1026 02:01:41.825140   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:41.825193   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:41.828957   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:41.832393   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:41.832443   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:41.868232   61346 cri.go:89] found id: ""
	I1026 02:01:41.868258   61346 logs.go:282] 0 containers: []
	W1026 02:01:41.868265   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:41.868270   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:41.868324   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:41.906489   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:41.906516   61346 cri.go:89] found id: ""
	I1026 02:01:41.906524   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:41.906571   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:41.910417   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:41.910478   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:41.946304   61346 cri.go:89] found id: ""
	I1026 02:01:41.946333   61346 logs.go:282] 0 containers: []
	W1026 02:01:41.946342   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:41.946347   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:41.946414   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:41.983472   61346 cri.go:89] found id: ""
	I1026 02:01:41.983494   61346 logs.go:282] 0 containers: []
	W1026 02:01:41.983501   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:41.983518   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:41.983532   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:42.030375   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:42.030407   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:42.067393   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:42.067419   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:42.104374   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:42.104399   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:42.337072   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:42.337109   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:42.442464   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:42.442497   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:42.458447   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:42.458471   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:42.530643   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:42.530664   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:42.530676   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:42.571944   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:42.571972   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:42.645825   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:42.645864   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:45.188832   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:45.189474   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:45.189524   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:45.189574   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:45.221642   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:45.221669   61346 cri.go:89] found id: ""
	I1026 02:01:45.221679   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:45.221740   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:45.225200   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:45.225250   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:45.256641   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:45.256663   61346 cri.go:89] found id: ""
	I1026 02:01:45.256673   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:45.256736   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:45.260301   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:45.260356   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:45.298468   61346 cri.go:89] found id: ""
	I1026 02:01:45.298490   61346 logs.go:282] 0 containers: []
	W1026 02:01:45.298498   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:45.298503   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:45.298560   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:45.336252   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:45.336273   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:45.336277   61346 cri.go:89] found id: ""
	I1026 02:01:45.336283   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:45.336336   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:45.340429   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:45.344395   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:45.344447   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:45.382133   61346 cri.go:89] found id: ""
	I1026 02:01:45.382157   61346 logs.go:282] 0 containers: []
	W1026 02:01:45.382164   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:45.382170   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:45.382218   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:45.423921   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:45.423941   61346 cri.go:89] found id: ""
	I1026 02:01:45.423955   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:45.424001   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:45.427657   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:45.427723   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:45.459454   61346 cri.go:89] found id: ""
	I1026 02:01:45.459477   61346 logs.go:282] 0 containers: []
	W1026 02:01:45.459485   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:45.459491   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:45.459544   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:45.493995   61346 cri.go:89] found id: ""
	I1026 02:01:45.494022   61346 logs.go:282] 0 containers: []
	W1026 02:01:45.494030   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:45.494042   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:45.494053   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:45.558932   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:45.558956   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:45.558968   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:45.600269   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:45.600301   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:45.637631   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:45.637658   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:45.672455   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:45.672478   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:45.898144   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:45.898183   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:46.001553   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:46.001590   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:46.014584   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:46.014612   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:46.050070   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:46.050099   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:46.122012   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:46.122045   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:48.654559   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:48.655226   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:48.655278   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:48.655333   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:48.687657   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:48.687678   61346 cri.go:89] found id: ""
	I1026 02:01:48.687685   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:48.687731   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:48.691267   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:48.691328   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:48.722176   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:48.722203   61346 cri.go:89] found id: ""
	I1026 02:01:48.722214   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:48.722271   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:48.726029   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:48.726088   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:48.756765   61346 cri.go:89] found id: ""
	I1026 02:01:48.756789   61346 logs.go:282] 0 containers: []
	W1026 02:01:48.756798   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:48.756805   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:48.756870   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:48.789939   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:48.789972   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:48.789976   61346 cri.go:89] found id: ""
	I1026 02:01:48.789983   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:48.790041   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:48.793855   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:48.797178   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:48.797250   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:48.828626   61346 cri.go:89] found id: ""
	I1026 02:01:48.828651   61346 logs.go:282] 0 containers: []
	W1026 02:01:48.828658   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:48.828664   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:48.828712   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:48.864962   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:48.864984   61346 cri.go:89] found id: ""
	I1026 02:01:48.865007   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:48.865068   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:48.868946   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:48.869021   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:48.903366   61346 cri.go:89] found id: ""
	I1026 02:01:48.903388   61346 logs.go:282] 0 containers: []
	W1026 02:01:48.903396   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:48.903402   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:48.903461   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:48.933488   61346 cri.go:89] found id: ""
	I1026 02:01:48.933521   61346 logs.go:282] 0 containers: []
	W1026 02:01:48.933530   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:48.933543   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:48.933555   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:48.968710   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:48.968744   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:49.070033   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:49.070064   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:49.112803   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:49.112835   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:49.144343   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:49.144373   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:49.380238   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:49.380286   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:49.420714   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:49.420751   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:49.435215   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:49.435244   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:49.499051   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:49.499074   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:49.499087   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:49.535173   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:49.535204   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:52.102258   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:57.102653   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1026 02:01:57.102718   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:57.102770   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:57.137042   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:01:57.137069   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:57.137073   61346 cri.go:89] found id: ""
	I1026 02:01:57.137080   61346 logs.go:282] 2 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:57.137126   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:57.140841   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:57.144367   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:57.144418   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:57.180851   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:57.180889   61346 cri.go:89] found id: ""
	I1026 02:01:57.180896   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:57.180939   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:57.184825   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:57.184892   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:57.218887   61346 cri.go:89] found id: ""
	I1026 02:01:57.218921   61346 logs.go:282] 0 containers: []
	W1026 02:01:57.218931   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:57.218939   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:57.219005   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:57.250967   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:57.250992   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:57.250999   61346 cri.go:89] found id: ""
	I1026 02:01:57.251007   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:57.251069   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:57.254949   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:57.258367   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:57.258422   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:57.289606   61346 cri.go:89] found id: ""
	I1026 02:01:57.289642   61346 logs.go:282] 0 containers: []
	W1026 02:01:57.289650   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:57.289656   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:57.289717   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:57.321286   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:01:57.321312   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:57.321318   61346 cri.go:89] found id: ""
	I1026 02:01:57.321326   61346 logs.go:282] 2 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:57.321372   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:57.325150   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:57.328491   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:57.328544   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:57.359673   61346 cri.go:89] found id: ""
	I1026 02:01:57.359695   61346 logs.go:282] 0 containers: []
	W1026 02:01:57.359702   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:57.359707   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:57.359761   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:57.396815   61346 cri.go:89] found id: ""
	I1026 02:01:57.396842   61346 logs.go:282] 0 containers: []
	W1026 02:01:57.396849   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:57.396858   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:57.396875   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:57.411804   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:57.411830   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 02:02:07.483917   61346 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.072063845s)
	W1026 02:02:07.483960   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1026 02:02:07.483975   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:07.483988   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:07.521628   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:07.521658   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:07.552547   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:02:07.552573   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:02:07.591042   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:07.591068   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:07.695732   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:07.695772   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:07.733457   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:07.733486   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:07.802864   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:07.802901   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:07.835577   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:07.835604   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:08.091616   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:08.091652   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:08.128075   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:02:08.128100   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:02:10.663065   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:11.777091   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": read tcp 192.168.72.1:59572->192.168.72.48:8443: read: connection reset by peer
	I1026 02:02:11.777150   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:11.777200   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:11.820458   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:11.820483   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:02:11.820489   61346 cri.go:89] found id: ""
	I1026 02:02:11.820496   61346 logs.go:282] 2 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:02:11.820542   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:11.824677   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:11.828148   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:11.828213   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:11.860806   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:11.860830   61346 cri.go:89] found id: ""
	I1026 02:02:11.860838   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:11.860888   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:11.864410   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:11.864467   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:11.895785   61346 cri.go:89] found id: ""
	I1026 02:02:11.895810   61346 logs.go:282] 0 containers: []
	W1026 02:02:11.895817   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:11.895823   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:11.895870   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:11.931392   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:11.931416   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:11.931421   61346 cri.go:89] found id: ""
	I1026 02:02:11.931427   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:11.931477   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:11.938408   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:11.941713   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:11.941769   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:11.994793   61346 cri.go:89] found id: ""
	I1026 02:02:11.994822   61346 logs.go:282] 0 containers: []
	W1026 02:02:11.994833   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:11.994840   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:11.994900   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:12.028264   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:12.028286   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:02:12.028290   61346 cri.go:89] found id: ""
	I1026 02:02:12.028300   61346 logs.go:282] 2 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:02:12.028348   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:12.031897   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:12.035412   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:12.035466   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:12.066681   61346 cri.go:89] found id: ""
	I1026 02:02:12.066708   61346 logs.go:282] 0 containers: []
	W1026 02:02:12.066716   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:12.066722   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:12.066766   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:12.099138   61346 cri.go:89] found id: ""
	I1026 02:02:12.099161   61346 logs.go:282] 0 containers: []
	W1026 02:02:12.099168   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:12.099176   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:12.099189   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:12.133743   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:02:12.133769   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	W1026 02:02:12.165695   61346 logs.go:130] failed kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68": Process exited with status 1
	stdout:
	
	stderr:
	E1026 02:02:12.158231    5086 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68\": container with ID starting with 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68 not found: ID does not exist" containerID="3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	time="2024-10-26T02:02:12Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68\": container with ID starting with 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1026 02:02:12.158231    5086 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68\": container with ID starting with 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68 not found: ID does not exist" containerID="3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	time="2024-10-26T02:02:12Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68\": container with ID starting with 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68 not found: ID does not exist"
	
	** /stderr **
	I1026 02:02:12.165733   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:12.165751   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:12.198406   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:02:12.198435   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:02:12.230831   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:12.230859   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:12.508892   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:12.508931   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:12.552240   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:12.552266   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:12.653540   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:12.653583   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:12.668700   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:12.668725   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:12.738451   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:12.738474   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:12.738486   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:12.787014   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:12.787042   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:12.858583   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:12.858617   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:15.403465   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:15.404114   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:15.404171   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:15.404221   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:15.440283   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:15.440304   61346 cri.go:89] found id: ""
	I1026 02:02:15.440311   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:15.440358   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:15.444163   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:15.444207   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:15.482062   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:15.482087   61346 cri.go:89] found id: ""
	I1026 02:02:15.482097   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:15.482144   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:15.485868   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:15.485917   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:15.518077   61346 cri.go:89] found id: ""
	I1026 02:02:15.518105   61346 logs.go:282] 0 containers: []
	W1026 02:02:15.518114   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:15.518122   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:15.518188   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:15.551232   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:15.551254   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:15.551260   61346 cri.go:89] found id: ""
	I1026 02:02:15.551267   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:15.551324   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:15.554964   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:15.558439   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:15.558489   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:15.595053   61346 cri.go:89] found id: ""
	I1026 02:02:15.595075   61346 logs.go:282] 0 containers: []
	W1026 02:02:15.595083   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:15.595088   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:15.595133   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:15.627051   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:15.627072   61346 cri.go:89] found id: ""
	I1026 02:02:15.627081   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:15.627143   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:15.630841   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:15.630899   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:15.662237   61346 cri.go:89] found id: ""
	I1026 02:02:15.662263   61346 logs.go:282] 0 containers: []
	W1026 02:02:15.662270   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:15.662276   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:15.662322   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:15.694582   61346 cri.go:89] found id: ""
	I1026 02:02:15.694607   61346 logs.go:282] 0 containers: []
	W1026 02:02:15.694614   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:15.694632   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:15.694643   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:15.795538   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:15.795575   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:15.856869   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:15.856897   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:15.856909   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:15.896982   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:15.897012   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:15.930053   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:15.930080   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:16.205663   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:16.205705   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:16.242284   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:16.242311   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:16.255367   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:16.255394   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:16.291142   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:16.291170   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:16.360224   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:16.360257   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:18.895015   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:18.895672   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:18.895716   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:18.895765   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:18.929029   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:18.929057   61346 cri.go:89] found id: ""
	I1026 02:02:18.929071   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:18.929129   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:18.932722   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:18.932779   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:18.964370   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:18.964393   61346 cri.go:89] found id: ""
	I1026 02:02:18.964402   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:18.964466   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:18.968062   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:18.968129   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:19.001916   61346 cri.go:89] found id: ""
	I1026 02:02:19.001943   61346 logs.go:282] 0 containers: []
	W1026 02:02:19.001950   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:19.001956   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:19.002002   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:19.033576   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:19.033602   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:19.033607   61346 cri.go:89] found id: ""
	I1026 02:02:19.033614   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:19.033674   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:19.037391   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:19.040838   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:19.040901   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:19.073540   61346 cri.go:89] found id: ""
	I1026 02:02:19.073565   61346 logs.go:282] 0 containers: []
	W1026 02:02:19.073572   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:19.073577   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:19.073622   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:19.108089   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:19.108114   61346 cri.go:89] found id: ""
	I1026 02:02:19.108123   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:19.108167   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:19.111887   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:19.111946   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:19.146400   61346 cri.go:89] found id: ""
	I1026 02:02:19.146432   61346 logs.go:282] 0 containers: []
	W1026 02:02:19.146442   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:19.146450   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:19.146504   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:19.179780   61346 cri.go:89] found id: ""
	I1026 02:02:19.179811   61346 logs.go:282] 0 containers: []
	W1026 02:02:19.179822   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:19.179840   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:19.179856   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:19.213669   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:19.213701   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:19.250015   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:19.250042   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:19.354985   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:19.355016   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:19.439524   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:19.439557   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:19.475428   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:19.475455   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:19.516451   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:19.516480   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:19.749926   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:19.749968   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:19.791625   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:19.791657   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:19.805157   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:19.805186   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:19.868578   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:22.369637   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:22.370240   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:22.370288   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:22.370343   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:22.403651   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:22.403680   61346 cri.go:89] found id: ""
	I1026 02:02:22.403691   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:22.403759   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:22.407572   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:22.407644   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:22.438929   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:22.438957   61346 cri.go:89] found id: ""
	I1026 02:02:22.438964   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:22.439016   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:22.442590   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:22.442642   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:22.476807   61346 cri.go:89] found id: ""
	I1026 02:02:22.476835   61346 logs.go:282] 0 containers: []
	W1026 02:02:22.476843   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:22.476848   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:22.476895   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:22.509688   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:22.509719   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:22.509725   61346 cri.go:89] found id: ""
	I1026 02:02:22.509734   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:22.509793   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:22.513628   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:22.517162   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:22.517213   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:22.548953   61346 cri.go:89] found id: ""
	I1026 02:02:22.548978   61346 logs.go:282] 0 containers: []
	W1026 02:02:22.548987   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:22.548993   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:22.549049   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:22.582352   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:22.582372   61346 cri.go:89] found id: ""
	I1026 02:02:22.582379   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:22.582425   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:22.586291   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:22.586343   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:22.617896   61346 cri.go:89] found id: ""
	I1026 02:02:22.617919   61346 logs.go:282] 0 containers: []
	W1026 02:02:22.617928   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:22.617935   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:22.617997   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:22.649592   61346 cri.go:89] found id: ""
	I1026 02:02:22.649620   61346 logs.go:282] 0 containers: []
	W1026 02:02:22.649636   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:22.649653   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:22.649667   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:22.681588   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:22.681615   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:22.910716   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:22.910753   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:22.972357   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:22.972383   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:22.972398   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:23.009349   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:23.009376   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:23.046544   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:23.046573   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:23.113784   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:23.113819   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:23.218951   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:23.218990   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:23.232688   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:23.232716   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:23.265609   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:23.265634   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:25.809260   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:25.809924   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:25.809978   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:25.810026   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:25.842996   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:25.843018   61346 cri.go:89] found id: ""
	I1026 02:02:25.843026   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:25.843071   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:25.846813   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:25.846870   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:25.879374   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:25.879395   61346 cri.go:89] found id: ""
	I1026 02:02:25.879403   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:25.879449   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:25.883367   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:25.883429   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:25.916515   61346 cri.go:89] found id: ""
	I1026 02:02:25.916552   61346 logs.go:282] 0 containers: []
	W1026 02:02:25.916565   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:25.916573   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:25.916638   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:25.949559   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:25.949581   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:25.949586   61346 cri.go:89] found id: ""
	I1026 02:02:25.949592   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:25.949637   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:25.953333   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:25.956778   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:25.956843   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:25.989764   61346 cri.go:89] found id: ""
	I1026 02:02:25.989788   61346 logs.go:282] 0 containers: []
	W1026 02:02:25.989796   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:25.989802   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:25.989851   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:26.025336   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:26.025356   61346 cri.go:89] found id: ""
	I1026 02:02:26.025365   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:26.025431   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:26.029006   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:26.029067   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:26.061030   61346 cri.go:89] found id: ""
	I1026 02:02:26.061055   61346 logs.go:282] 0 containers: []
	W1026 02:02:26.061062   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:26.061069   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:26.061123   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:26.093721   61346 cri.go:89] found id: ""
	I1026 02:02:26.093745   61346 logs.go:282] 0 containers: []
	W1026 02:02:26.093755   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:26.093768   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:26.093778   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:26.125693   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:26.125717   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:26.161383   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:26.161410   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:26.199392   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:26.199419   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:26.267481   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:26.267513   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:26.328261   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:26.328288   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:26.328300   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:26.361570   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:26.361603   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:26.579535   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:26.579573   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:26.619047   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:26.619075   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:26.725765   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:26.725799   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:29.239446   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:29.240070   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:29.240131   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:29.240182   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:29.276196   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:29.276221   61346 cri.go:89] found id: ""
	I1026 02:02:29.276231   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:29.276280   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:29.280051   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:29.280117   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:29.316260   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:29.316281   61346 cri.go:89] found id: ""
	I1026 02:02:29.316288   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:29.316346   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:29.320038   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:29.320104   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:29.353542   61346 cri.go:89] found id: ""
	I1026 02:02:29.353572   61346 logs.go:282] 0 containers: []
	W1026 02:02:29.353580   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:29.353586   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:29.353638   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:29.393524   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:29.393544   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:29.393547   61346 cri.go:89] found id: ""
	I1026 02:02:29.393554   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:29.393600   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:29.397227   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:29.400632   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:29.400688   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:29.432303   61346 cri.go:89] found id: ""
	I1026 02:02:29.432326   61346 logs.go:282] 0 containers: []
	W1026 02:02:29.432334   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:29.432339   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:29.432395   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:29.465199   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:29.465219   61346 cri.go:89] found id: ""
	I1026 02:02:29.465226   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:29.465272   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:29.469249   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:29.469308   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:29.503144   61346 cri.go:89] found id: ""
	I1026 02:02:29.503170   61346 logs.go:282] 0 containers: []
	W1026 02:02:29.503178   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:29.503184   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:29.503232   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:29.536928   61346 cri.go:89] found id: ""
	I1026 02:02:29.536955   61346 logs.go:282] 0 containers: []
	W1026 02:02:29.536963   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:29.536977   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:29.536991   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:29.599022   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:29.599042   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:29.599055   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:29.668945   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:29.668980   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:29.702721   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:29.702753   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:29.930599   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:29.930648   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:29.973388   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:29.973438   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:30.076853   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:30.076892   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:30.090433   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:30.090458   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:30.125968   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:30.125994   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:30.163546   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:30.163576   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:32.698485   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:32.699097   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:32.699145   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:32.699189   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:32.733585   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:32.733612   61346 cri.go:89] found id: ""
	I1026 02:02:32.733622   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:32.733684   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:32.737320   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:32.737375   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:32.769567   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:32.769589   61346 cri.go:89] found id: ""
	I1026 02:02:32.769596   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:32.769645   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:32.773255   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:32.773331   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:32.805727   61346 cri.go:89] found id: ""
	I1026 02:02:32.805756   61346 logs.go:282] 0 containers: []
	W1026 02:02:32.805765   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:32.805777   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:32.805842   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:32.839199   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:32.839218   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:32.839222   61346 cri.go:89] found id: ""
	I1026 02:02:32.839229   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:32.839271   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:32.842886   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:32.846126   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:32.846182   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:32.878672   61346 cri.go:89] found id: ""
	I1026 02:02:32.878700   61346 logs.go:282] 0 containers: []
	W1026 02:02:32.878710   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:32.878718   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:32.878769   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:32.915524   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:32.915549   61346 cri.go:89] found id: ""
	I1026 02:02:32.915558   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:32.915613   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:32.919431   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:32.919492   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:32.952460   61346 cri.go:89] found id: ""
	I1026 02:02:32.952489   61346 logs.go:282] 0 containers: []
	W1026 02:02:32.952500   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:32.952506   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:32.952551   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:32.985156   61346 cri.go:89] found id: ""
	I1026 02:02:32.985183   61346 logs.go:282] 0 containers: []
	W1026 02:02:32.985191   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:32.985206   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:32.985218   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:33.205658   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:33.205693   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:33.315001   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:33.315038   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:33.382645   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:33.382670   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:33.382682   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:33.454153   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:33.454188   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:33.487804   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:33.487834   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:33.521200   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:33.521236   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:33.534212   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:33.534243   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:33.570941   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:33.570973   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:33.609836   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:33.609868   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:36.151548   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:36.152194   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:36.152241   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:36.152288   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:36.186165   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:36.186190   61346 cri.go:89] found id: ""
	I1026 02:02:36.186198   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:36.186258   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:36.190006   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:36.190072   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:36.221821   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:36.221840   61346 cri.go:89] found id: ""
	I1026 02:02:36.221847   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:36.221903   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:36.225739   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:36.225798   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:36.257132   61346 cri.go:89] found id: ""
	I1026 02:02:36.257158   61346 logs.go:282] 0 containers: []
	W1026 02:02:36.257165   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:36.257170   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:36.257216   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:36.290728   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:36.290750   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:36.290756   61346 cri.go:89] found id: ""
	I1026 02:02:36.290765   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:36.290824   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:36.294642   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:36.298105   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:36.298176   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:36.328680   61346 cri.go:89] found id: ""
	I1026 02:02:36.328706   61346 logs.go:282] 0 containers: []
	W1026 02:02:36.328714   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:36.328719   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:36.328779   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:36.360650   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:36.360673   61346 cri.go:89] found id: ""
	I1026 02:02:36.360683   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:36.360740   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:36.364455   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:36.364528   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:36.397049   61346 cri.go:89] found id: ""
	I1026 02:02:36.397080   61346 logs.go:282] 0 containers: []
	W1026 02:02:36.397090   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:36.397098   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:36.397159   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:36.428657   61346 cri.go:89] found id: ""
	I1026 02:02:36.428682   61346 logs.go:282] 0 containers: []
	W1026 02:02:36.428692   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:36.428708   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:36.428722   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:36.655812   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:36.655850   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:36.701057   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:36.701080   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:36.810072   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:36.810110   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:36.851029   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:36.851059   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:36.884147   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:36.884176   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:36.964433   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:36.964467   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:36.997887   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:36.997913   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:37.011320   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:37.011351   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:37.073351   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:37.073372   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:37.073388   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:39.616125   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:39.616763   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:39.616809   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:39.616859   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:39.650718   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:39.650741   61346 cri.go:89] found id: ""
	I1026 02:02:39.650747   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:39.650803   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:39.654856   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:39.654918   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:39.687829   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:39.687855   61346 cri.go:89] found id: ""
	I1026 02:02:39.687862   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:39.687916   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:39.691736   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:39.691813   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:39.725456   61346 cri.go:89] found id: ""
	I1026 02:02:39.725478   61346 logs.go:282] 0 containers: []
	W1026 02:02:39.725486   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:39.725492   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:39.725543   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:39.758138   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:39.758203   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:39.758215   61346 cri.go:89] found id: ""
	I1026 02:02:39.758223   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:39.758288   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:39.762009   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:39.765676   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:39.765728   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:39.797015   61346 cri.go:89] found id: ""
	I1026 02:02:39.797046   61346 logs.go:282] 0 containers: []
	W1026 02:02:39.797054   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:39.797060   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:39.797120   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:39.828873   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:39.828899   61346 cri.go:89] found id: ""
	I1026 02:02:39.828908   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:39.828968   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:39.832708   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:39.832761   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:39.865055   61346 cri.go:89] found id: ""
	I1026 02:02:39.865085   61346 logs.go:282] 0 containers: []
	W1026 02:02:39.865095   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:39.865103   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:39.865172   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:39.896749   61346 cri.go:89] found id: ""
	I1026 02:02:39.896776   61346 logs.go:282] 0 containers: []
	W1026 02:02:39.896784   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:39.896795   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:39.896810   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:39.909739   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:39.909769   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:39.974509   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:39.974534   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:39.974546   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:40.011144   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:40.011177   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:40.042751   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:40.042782   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:40.286733   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:40.286777   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:40.395108   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:40.395144   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:40.433276   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:40.433310   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:40.502277   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:40.502316   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:40.535877   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:40.535907   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:43.076004   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:43.076575   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:43.076644   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:43.076703   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:43.110248   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:43.110271   61346 cri.go:89] found id: ""
	I1026 02:02:43.110279   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:43.110324   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:43.114088   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:43.114143   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:43.148461   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:43.148484   61346 cri.go:89] found id: ""
	I1026 02:02:43.148491   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:43.148536   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:43.152157   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:43.152214   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:43.183703   61346 cri.go:89] found id: ""
	I1026 02:02:43.183736   61346 logs.go:282] 0 containers: []
	W1026 02:02:43.183746   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:43.183753   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:43.183814   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:43.217197   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:43.217223   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:43.217229   61346 cri.go:89] found id: ""
	I1026 02:02:43.217237   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:43.217300   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:43.220997   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:43.224329   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:43.224375   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:43.256892   61346 cri.go:89] found id: ""
	I1026 02:02:43.256921   61346 logs.go:282] 0 containers: []
	W1026 02:02:43.256928   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:43.256934   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:43.256995   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:43.290558   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:43.290610   61346 cri.go:89] found id: ""
	I1026 02:02:43.290621   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:43.290676   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:43.294453   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:43.294528   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:43.326405   61346 cri.go:89] found id: ""
	I1026 02:02:43.326433   61346 logs.go:282] 0 containers: []
	W1026 02:02:43.326440   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:43.326445   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:43.326496   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:43.358535   61346 cri.go:89] found id: ""
	I1026 02:02:43.358567   61346 logs.go:282] 0 containers: []
	W1026 02:02:43.358578   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:43.358595   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:43.358609   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:43.461667   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:43.461704   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:43.500697   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:43.500728   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:43.573581   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:43.573631   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:43.606316   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:43.606343   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:43.645077   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:43.645106   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:43.658729   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:43.658762   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:43.719053   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:43.719083   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:43.719100   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:43.755287   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:43.755316   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:43.788902   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:43.788933   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
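Each cycle's "Gathering logs for ..." steps shell out to the same two commands: crictl logs --tail 400 <container-id> for each control-plane container, and journalctl -n 400 for the kubelet and CRI-O units. The sketch below wraps those two invocations in Go purely for illustration; the helper names are invented here, and minikube's own code path (logs.go via ssh_runner.go) runs the commands over SSH inside the VM rather than locally.

	// log_gather_sketch.go -- illustrative sketch, not minikube source.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerLogs returns the last `tail` lines of a CRI container's logs,
	// matching the crictl invocation shown in the log above.
	func containerLogs(id string, tail int) (string, error) {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "logs",
			"--tail", fmt.Sprint(tail), id).CombinedOutput()
		return string(out), err
	}

	// unitLogs returns the last `lines` journal entries for a systemd unit,
	// e.g. "kubelet" or "crio", matching the journalctl invocation in the log.
	func unitLogs(unit string, lines int) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit,
			"-n", fmt.Sprint(lines)).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Container ID copied from the log above (the kube-apiserver container).
		const apiserverID = "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"

		if logs, err := containerLogs(apiserverID, 400); err == nil {
			fmt.Println(logs)
		}
		if logs, err := unitLogs("kubelet", 400); err == nil {
			fmt.Println(logs)
		}
	}
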
	I1026 02:02:46.511867   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:46.512465   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:46.512510   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:46.512556   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:46.548273   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:46.548297   61346 cri.go:89] found id: ""
	I1026 02:02:46.548304   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:46.548347   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:46.552088   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:46.552138   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:46.584097   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:46.584119   61346 cri.go:89] found id: ""
	I1026 02:02:46.584127   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:46.584181   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:46.588008   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:46.588072   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:46.620524   61346 cri.go:89] found id: ""
	I1026 02:02:46.620548   61346 logs.go:282] 0 containers: []
	W1026 02:02:46.620557   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:46.620562   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:46.620618   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:46.658098   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:46.658126   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:46.658132   61346 cri.go:89] found id: ""
	I1026 02:02:46.658140   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:46.658199   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:46.661881   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:46.665176   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:46.665225   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:46.696936   61346 cri.go:89] found id: ""
	I1026 02:02:46.696964   61346 logs.go:282] 0 containers: []
	W1026 02:02:46.696971   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:46.696977   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:46.697039   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:46.729366   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:46.729389   61346 cri.go:89] found id: ""
	I1026 02:02:46.729396   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:46.729466   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:46.733337   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:46.733467   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:46.764260   61346 cri.go:89] found id: ""
	I1026 02:02:46.764282   61346 logs.go:282] 0 containers: []
	W1026 02:02:46.764290   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:46.764296   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:46.764344   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:46.797523   61346 cri.go:89] found id: ""
	I1026 02:02:46.797548   61346 logs.go:282] 0 containers: []
	W1026 02:02:46.797557   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:46.797567   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:46.797579   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:46.909622   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:46.909659   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:46.974670   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:46.974695   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:46.974709   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:47.018707   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:47.018743   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:47.051128   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:47.051155   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:47.281134   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:47.281179   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:47.295219   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:47.295256   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:47.329525   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:47.329555   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:47.404243   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:47.404280   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:47.440107   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:47.440141   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:49.978053   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:49.978652   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:49.978704   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:49.978764   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:50.013089   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:50.013119   61346 cri.go:89] found id: ""
	I1026 02:02:50.013129   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:50.013190   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:50.017006   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:50.017088   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:50.048872   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:50.048897   61346 cri.go:89] found id: ""
	I1026 02:02:50.048906   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:50.048967   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:50.052557   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:50.052635   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:50.084898   61346 cri.go:89] found id: ""
	I1026 02:02:50.084928   61346 logs.go:282] 0 containers: []
	W1026 02:02:50.084936   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:50.084942   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:50.084989   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:50.116188   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:50.116212   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:50.116218   61346 cri.go:89] found id: ""
	I1026 02:02:50.116226   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:50.116270   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:50.119872   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:50.123212   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:50.123275   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:50.160590   61346 cri.go:89] found id: ""
	I1026 02:02:50.160621   61346 logs.go:282] 0 containers: []
	W1026 02:02:50.160632   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:50.160640   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:50.160689   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:50.192979   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:50.192999   61346 cri.go:89] found id: ""
	I1026 02:02:50.193006   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:50.193051   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:50.196593   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:50.196660   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:50.226322   61346 cri.go:89] found id: ""
	I1026 02:02:50.226349   61346 logs.go:282] 0 containers: []
	W1026 02:02:50.226358   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:50.226366   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:50.226416   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:50.257832   61346 cri.go:89] found id: ""
	I1026 02:02:50.257856   61346 logs.go:282] 0 containers: []
	W1026 02:02:50.257863   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:50.257877   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:50.257890   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:50.302398   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:50.302424   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:50.379629   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:50.379667   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:50.415070   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:50.415100   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:50.629087   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:50.629123   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:50.740093   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:50.740133   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:50.757252   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:50.757277   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:50.824893   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:50.824918   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:50.824929   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:50.866786   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:50.866812   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:50.905577   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:50.905603   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:53.443026   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:53.443662   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:53.443713   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:53.443759   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:53.483805   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:53.483824   61346 cri.go:89] found id: ""
	I1026 02:02:53.483831   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:53.483890   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:53.487896   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:53.487953   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:53.524571   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:53.524597   61346 cri.go:89] found id: ""
	I1026 02:02:53.524605   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:53.524680   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:53.528250   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:53.528319   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:53.559251   61346 cri.go:89] found id: ""
	I1026 02:02:53.559278   61346 logs.go:282] 0 containers: []
	W1026 02:02:53.559286   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:53.559291   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:53.559337   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:53.591011   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:53.591031   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:53.591035   61346 cri.go:89] found id: ""
	I1026 02:02:53.591041   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:53.591087   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:53.594869   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:53.598201   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:53.598254   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:53.631253   61346 cri.go:89] found id: ""
	I1026 02:02:53.631278   61346 logs.go:282] 0 containers: []
	W1026 02:02:53.631288   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:53.631295   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:53.631356   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:53.663634   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:53.663657   61346 cri.go:89] found id: ""
	I1026 02:02:53.663668   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:53.663712   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:53.667626   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:53.667681   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:53.698818   61346 cri.go:89] found id: ""
	I1026 02:02:53.698847   61346 logs.go:282] 0 containers: []
	W1026 02:02:53.698854   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:53.698859   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:53.698906   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:53.732094   61346 cri.go:89] found id: ""
	I1026 02:02:53.732122   61346 logs.go:282] 0 containers: []
	W1026 02:02:53.732129   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:53.732141   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:53.732151   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:53.770127   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:53.770155   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:53.881427   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:53.881464   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:53.947809   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:53.947838   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:53.947855   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:53.989091   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:53.989125   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:54.022240   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:54.022268   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:54.054505   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:54.054535   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:54.271043   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:54.271078   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:54.284024   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:54.284049   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:54.323290   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:54.323321   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:56.896246   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:56.896816   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:56.896862   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:56.896910   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:56.933652   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:56.933672   61346 cri.go:89] found id: ""
	I1026 02:02:56.933679   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:56.933729   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:56.937440   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:56.937481   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:56.968265   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:56.968291   61346 cri.go:89] found id: ""
	I1026 02:02:56.968301   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:56.968355   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:56.971944   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:56.972014   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:57.003503   61346 cri.go:89] found id: ""
	I1026 02:02:57.003534   61346 logs.go:282] 0 containers: []
	W1026 02:02:57.003552   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:57.003559   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:57.003612   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:57.034482   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:57.034507   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:57.034513   61346 cri.go:89] found id: ""
	I1026 02:02:57.034521   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:57.034576   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:57.038273   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:57.041648   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:57.041704   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:57.075834   61346 cri.go:89] found id: ""
	I1026 02:02:57.075862   61346 logs.go:282] 0 containers: []
	W1026 02:02:57.075880   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:57.075886   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:57.075938   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:57.109328   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:57.109351   61346 cri.go:89] found id: ""
	I1026 02:02:57.109358   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:57.109406   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:57.112981   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:57.113039   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:57.149339   61346 cri.go:89] found id: ""
	I1026 02:02:57.149361   61346 logs.go:282] 0 containers: []
	W1026 02:02:57.149369   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:57.149374   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:57.149430   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:57.181958   61346 cri.go:89] found id: ""
	I1026 02:02:57.181985   61346 logs.go:282] 0 containers: []
	W1026 02:02:57.181993   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:57.182005   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:57.182017   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:57.244186   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:57.244202   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:57.244218   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:57.318673   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:57.318705   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:57.332355   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:57.332390   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:57.368229   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:57.368260   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:57.404874   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:57.404905   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:57.439449   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:57.439476   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:57.470512   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:57.470541   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:57.700847   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:57.700889   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:57.746463   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:57.746493   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:03:00.357608   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:03:00.358300   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:03:00.358361   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:03:00.358420   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:03:00.394371   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:03:00.394395   61346 cri.go:89] found id: ""
	I1026 02:03:00.394403   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:03:00.394458   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:00.398160   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:03:00.398213   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:03:00.429928   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:03:00.429955   61346 cri.go:89] found id: ""
	I1026 02:03:00.429965   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:03:00.430021   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:00.433716   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:03:00.433779   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:03:00.468245   61346 cri.go:89] found id: ""
	I1026 02:03:00.468272   61346 logs.go:282] 0 containers: []
	W1026 02:03:00.468279   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:03:00.468285   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:03:00.468333   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:03:00.502849   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:03:00.502882   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:03:00.502888   61346 cri.go:89] found id: ""
	I1026 02:03:00.502898   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:03:00.502956   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:00.506808   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:00.510244   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:03:00.510300   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:03:00.550741   61346 cri.go:89] found id: ""
	I1026 02:03:00.550774   61346 logs.go:282] 0 containers: []
	W1026 02:03:00.550784   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:03:00.550791   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:03:00.550857   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:03:00.583299   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:03:00.583329   61346 cri.go:89] found id: ""
	I1026 02:03:00.583339   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:03:00.583395   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:00.587382   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:03:00.587449   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:03:00.620327   61346 cri.go:89] found id: ""
	I1026 02:03:00.620355   61346 logs.go:282] 0 containers: []
	W1026 02:03:00.620364   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:03:00.620369   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:03:00.620422   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:03:00.653640   61346 cri.go:89] found id: ""
	I1026 02:03:00.653674   61346 logs.go:282] 0 containers: []
	W1026 02:03:00.653684   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:03:00.653702   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:03:00.653716   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:03:00.891000   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:03:00.891039   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:03:01.005914   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:03:01.005950   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:03:01.040332   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:03:01.040363   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:03:01.079913   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:03:01.079949   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:03:01.159277   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:03:01.159313   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:03:01.193303   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:03:01.193330   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:03:01.225091   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:03:01.225116   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:03:01.238709   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:03:01.238748   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:03:01.306043   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:03:01.306068   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:03:01.306082   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:03:03.842485   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:03:03.843131   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:03:03.843190   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:03:03.843238   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:03:03.877861   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:03:03.877899   61346 cri.go:89] found id: ""
	I1026 02:03:03.877907   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:03:03.877969   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:03.881614   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:03:03.881674   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:03:03.912267   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:03:03.912288   61346 cri.go:89] found id: ""
	I1026 02:03:03.912296   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:03:03.912345   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:03.916002   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:03:03.916068   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:03:03.953650   61346 cri.go:89] found id: ""
	I1026 02:03:03.953680   61346 logs.go:282] 0 containers: []
	W1026 02:03:03.953690   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:03:03.953697   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:03:03.953745   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:03:03.985924   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:03:03.985945   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:03:03.985949   61346 cri.go:89] found id: ""
	I1026 02:03:03.985955   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:03:03.986000   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:03.989679   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:03.992904   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:03:03.992975   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:03:04.026028   61346 cri.go:89] found id: ""
	I1026 02:03:04.026051   61346 logs.go:282] 0 containers: []
	W1026 02:03:04.026059   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:03:04.026064   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:03:04.026119   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:03:04.057365   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:03:04.057386   61346 cri.go:89] found id: ""
	I1026 02:03:04.057394   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:03:04.057459   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:04.060937   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:03:04.060990   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:03:04.091927   61346 cri.go:89] found id: ""
	I1026 02:03:04.091954   61346 logs.go:282] 0 containers: []
	W1026 02:03:04.091964   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:03:04.091972   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:03:04.092033   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:03:04.126411   61346 cri.go:89] found id: ""
	I1026 02:03:04.126440   61346 logs.go:282] 0 containers: []
	W1026 02:03:04.126450   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:03:04.126463   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:03:04.126474   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:03:04.139393   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:03:04.139418   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:03:04.203573   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:03:04.203604   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:03:04.203625   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:03:04.239564   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:03:04.239594   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:03:04.275438   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:03:04.275465   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:03:04.307496   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:03:04.307521   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:03:04.345604   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:03:04.345636   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:03:04.455278   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:03:04.455316   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:03:04.499032   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:03:04.499062   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:03:04.571494   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:03:04.571532   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:03:07.300160   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:03:07.300782   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:03:07.300840   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:03:07.300889   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:03:07.340367   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:03:07.340386   61346 cri.go:89] found id: ""
	I1026 02:03:07.340393   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:03:07.340438   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:07.344049   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:03:07.344122   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:03:07.375434   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:03:07.375461   61346 cri.go:89] found id: ""
	I1026 02:03:07.375471   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:03:07.375525   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:07.379051   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:03:07.379117   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:03:07.411223   61346 cri.go:89] found id: ""
	I1026 02:03:07.411251   61346 logs.go:282] 0 containers: []
	W1026 02:03:07.411261   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:03:07.411268   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:03:07.411331   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:03:07.443527   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:03:07.443547   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:03:07.443550   61346 cri.go:89] found id: ""
	I1026 02:03:07.443557   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:03:07.443604   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:07.447208   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:07.450644   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:03:07.450701   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:03:07.482703   61346 cri.go:89] found id: ""
	I1026 02:03:07.482727   61346 logs.go:282] 0 containers: []
	W1026 02:03:07.482735   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:03:07.482740   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:03:07.482782   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:03:07.518953   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:03:07.518986   61346 cri.go:89] found id: ""
	I1026 02:03:07.518995   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:03:07.519051   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:07.522859   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:03:07.522928   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:03:07.555058   61346 cri.go:89] found id: ""
	I1026 02:03:07.555083   61346 logs.go:282] 0 containers: []
	W1026 02:03:07.555091   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:03:07.555100   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:03:07.555148   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:03:07.588175   61346 cri.go:89] found id: ""
	I1026 02:03:07.588209   61346 logs.go:282] 0 containers: []
	W1026 02:03:07.588221   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:03:07.588238   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:03:07.588252   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:03:07.626373   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:03:07.626404   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:03:07.708119   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:03:07.708152   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:03:07.740472   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:03:07.740497   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:03:07.780052   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:03:07.780079   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:03:07.816183   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:03:07.816210   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:03:08.052390   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:03:08.052426   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:03:08.095417   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:03:08.095454   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:03:08.216568   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:03:08.216618   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:03:08.230951   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:03:08.230979   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:03:08.297568   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:03:10.798588   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:03:10.799205   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:03:10.799274   61346 kubeadm.go:597] duration metric: took 4m3.515432127s to restartPrimaryControlPlane
	W1026 02:03:10.799354   61346 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1026 02:03:10.799383   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1026 02:03:11.501079   61346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 02:03:11.518136   61346 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 02:03:11.527605   61346 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:03:11.536460   61346 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:03:11.536479   61346 kubeadm.go:157] found existing configuration files:
	
	I1026 02:03:11.536523   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 02:03:11.544839   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:03:11.544889   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:03:11.553411   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 02:03:11.561760   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:03:11.561802   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:03:11.570406   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 02:03:11.578760   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:03:11.578822   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:03:11.588073   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 02:03:11.596725   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:03:11.596776   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 02:03:11.605769   61346 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 02:03:11.648887   61346 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1026 02:03:11.648956   61346 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 02:03:11.753470   61346 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 02:03:11.753649   61346 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 02:03:11.753759   61346 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 02:03:11.761141   61346 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 02:03:11.763476   61346 out.go:235]   - Generating certificates and keys ...
	I1026 02:03:11.763567   61346 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 02:03:11.763620   61346 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 02:03:11.763704   61346 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1026 02:03:11.763781   61346 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1026 02:03:11.763863   61346 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1026 02:03:11.763910   61346 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1026 02:03:11.763967   61346 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1026 02:03:11.764057   61346 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1026 02:03:11.764184   61346 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1026 02:03:11.764287   61346 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1026 02:03:11.764341   61346 kubeadm.go:310] [certs] Using the existing "sa" key
	I1026 02:03:11.764429   61346 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 02:03:11.893012   61346 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 02:03:12.350442   61346 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 02:03:12.597456   61346 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 02:03:12.817591   61346 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 02:03:12.974600   61346 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 02:03:12.975140   61346 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 02:03:12.980868   61346 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 02:03:12.982664   61346 out.go:235]   - Booting up control plane ...
	I1026 02:03:12.982772   61346 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 02:03:12.982838   61346 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 02:03:12.982894   61346 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 02:03:13.006027   61346 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 02:03:13.014869   61346 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 02:03:13.014965   61346 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 02:03:13.148493   61346 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 02:03:13.148661   61346 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 02:03:14.150279   61346 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001868121s
	I1026 02:03:14.150400   61346 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1026 02:07:14.152750   61346 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000457142s
	I1026 02:07:14.152797   61346 kubeadm.go:310] 
	I1026 02:07:14.152845   61346 kubeadm.go:310] Unfortunately, an error has occurred:
	I1026 02:07:14.152890   61346 kubeadm.go:310] 	context deadline exceeded
	I1026 02:07:14.152898   61346 kubeadm.go:310] 
	I1026 02:07:14.152948   61346 kubeadm.go:310] This error is likely caused by:
	I1026 02:07:14.152997   61346 kubeadm.go:310] 	- The kubelet is not running
	I1026 02:07:14.153161   61346 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1026 02:07:14.153195   61346 kubeadm.go:310] 
	I1026 02:07:14.153316   61346 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1026 02:07:14.153347   61346 kubeadm.go:310] 	- 'systemctl status kubelet'
	I1026 02:07:14.153385   61346 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I1026 02:07:14.153392   61346 kubeadm.go:310] 
	I1026 02:07:14.153519   61346 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1026 02:07:14.153622   61346 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1026 02:07:14.153730   61346 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1026 02:07:14.153852   61346 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1026 02:07:14.153964   61346 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I1026 02:07:14.154080   61346 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1026 02:07:14.154590   61346 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 02:07:14.154741   61346 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I1026 02:07:14.154843   61346 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1026 02:07:14.155012   61346 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001868121s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000457142s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001868121s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000457142s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1026 02:07:14.155068   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1026 02:07:14.845139   61346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 02:07:14.859829   61346 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:07:14.869581   61346 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:07:14.869605   61346 kubeadm.go:157] found existing configuration files:
	
	I1026 02:07:14.869658   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 02:07:14.879555   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:07:14.879618   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:07:14.888760   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 02:07:14.897408   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:07:14.897465   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:07:14.906440   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 02:07:14.915099   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:07:14.915154   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:07:14.924509   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 02:07:14.933049   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:07:14.933105   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 02:07:14.941731   61346 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 02:07:15.087537   61346 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 02:11:16.436298   61346 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I1026 02:11:16.436424   61346 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1026 02:11:16.439096   61346 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1026 02:11:16.439206   61346 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 02:11:16.439337   61346 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 02:11:16.439474   61346 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 02:11:16.439610   61346 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 02:11:16.439736   61346 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 02:11:16.441603   61346 out.go:235]   - Generating certificates and keys ...
	I1026 02:11:16.441687   61346 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 02:11:16.441743   61346 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 02:11:16.441823   61346 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1026 02:11:16.441896   61346 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1026 02:11:16.441986   61346 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1026 02:11:16.442065   61346 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1026 02:11:16.442150   61346 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1026 02:11:16.442235   61346 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1026 02:11:16.442358   61346 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1026 02:11:16.442472   61346 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1026 02:11:16.442535   61346 kubeadm.go:310] [certs] Using the existing "sa" key
	I1026 02:11:16.442603   61346 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 02:11:16.442677   61346 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 02:11:16.442765   61346 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 02:11:16.442873   61346 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 02:11:16.442969   61346 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 02:11:16.443047   61346 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 02:11:16.443144   61346 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 02:11:16.443235   61346 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 02:11:16.444711   61346 out.go:235]   - Booting up control plane ...
	I1026 02:11:16.444795   61346 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 02:11:16.444874   61346 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 02:11:16.445040   61346 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 02:11:16.445182   61346 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 02:11:16.445308   61346 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 02:11:16.445370   61346 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 02:11:16.445545   61346 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 02:11:16.445674   61346 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 02:11:16.445733   61346 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 511.979558ms
	I1026 02:11:16.445809   61346 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1026 02:11:16.445901   61346 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.001297562s
	I1026 02:11:16.445911   61346 kubeadm.go:310] 
	I1026 02:11:16.445966   61346 kubeadm.go:310] Unfortunately, an error has occurred:
	I1026 02:11:16.445997   61346 kubeadm.go:310] 	context deadline exceeded
	I1026 02:11:16.446003   61346 kubeadm.go:310] 
	I1026 02:11:16.446031   61346 kubeadm.go:310] This error is likely caused by:
	I1026 02:11:16.446063   61346 kubeadm.go:310] 	- The kubelet is not running
	I1026 02:11:16.446175   61346 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1026 02:11:16.446187   61346 kubeadm.go:310] 
	I1026 02:11:16.446332   61346 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1026 02:11:16.446370   61346 kubeadm.go:310] 	- 'systemctl status kubelet'
	I1026 02:11:16.446396   61346 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I1026 02:11:16.446402   61346 kubeadm.go:310] 
	I1026 02:11:16.446533   61346 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1026 02:11:16.446610   61346 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1026 02:11:16.446697   61346 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1026 02:11:16.446792   61346 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1026 02:11:16.446862   61346 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I1026 02:11:16.446972   61346 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1026 02:11:16.447020   61346 kubeadm.go:394] duration metric: took 12m9.243108785s to StartCluster
	I1026 02:11:16.447071   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:11:16.447131   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:11:16.490959   61346 cri.go:89] found id: "44d5d41eeb8c8b58abb214424cf349d71a177293d8609c511cdd288d0b070b54"
	I1026 02:11:16.490985   61346 cri.go:89] found id: ""
	I1026 02:11:16.490995   61346 logs.go:282] 1 containers: [44d5d41eeb8c8b58abb214424cf349d71a177293d8609c511cdd288d0b070b54]
	I1026 02:11:16.491056   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:11:16.495086   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:11:16.495155   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:11:16.534664   61346 cri.go:89] found id: ""
	I1026 02:11:16.534693   61346 logs.go:282] 0 containers: []
	W1026 02:11:16.534700   61346 logs.go:284] No container was found matching "etcd"
	I1026 02:11:16.534714   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:11:16.534770   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:11:16.570066   61346 cri.go:89] found id: ""
	I1026 02:11:16.570091   61346 logs.go:282] 0 containers: []
	W1026 02:11:16.570099   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:11:16.570104   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:11:16.570157   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:11:16.604894   61346 cri.go:89] found id: "6c6f2c8f97e0bdc88b8eaac6e1e9e07794bf7243b0b8b397543961c7f35584e8"
	I1026 02:11:16.604920   61346 cri.go:89] found id: ""
	I1026 02:11:16.604927   61346 logs.go:282] 1 containers: [6c6f2c8f97e0bdc88b8eaac6e1e9e07794bf7243b0b8b397543961c7f35584e8]
	I1026 02:11:16.604983   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:11:16.608961   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:11:16.609015   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:11:16.646246   61346 cri.go:89] found id: ""
	I1026 02:11:16.646277   61346 logs.go:282] 0 containers: []
	W1026 02:11:16.646285   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:11:16.646291   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:11:16.646339   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:11:16.678827   61346 cri.go:89] found id: "a3b71eb2723a1a3180087cbe3d02d2628dac81fc2ac6d749045b91e1a0cb307c"
	I1026 02:11:16.678851   61346 cri.go:89] found id: ""
	I1026 02:11:16.678860   61346 logs.go:282] 1 containers: [a3b71eb2723a1a3180087cbe3d02d2628dac81fc2ac6d749045b91e1a0cb307c]
	I1026 02:11:16.678903   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:11:16.682389   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:11:16.682439   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:11:16.713640   61346 cri.go:89] found id: ""
	I1026 02:11:16.713664   61346 logs.go:282] 0 containers: []
	W1026 02:11:16.713672   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:11:16.713677   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:11:16.713721   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:11:16.750715   61346 cri.go:89] found id: ""
	I1026 02:11:16.750737   61346 logs.go:282] 0 containers: []
	W1026 02:11:16.750745   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:11:16.750754   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:11:16.750765   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:11:16.883624   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:11:16.883659   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:11:16.897426   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:11:16.897459   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:11:16.975339   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:11:16.975367   61346 logs.go:123] Gathering logs for kube-apiserver [44d5d41eeb8c8b58abb214424cf349d71a177293d8609c511cdd288d0b070b54] ...
	I1026 02:11:16.975382   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44d5d41eeb8c8b58abb214424cf349d71a177293d8609c511cdd288d0b070b54"
	I1026 02:11:17.011746   61346 logs.go:123] Gathering logs for kube-scheduler [6c6f2c8f97e0bdc88b8eaac6e1e9e07794bf7243b0b8b397543961c7f35584e8] ...
	I1026 02:11:17.011776   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c6f2c8f97e0bdc88b8eaac6e1e9e07794bf7243b0b8b397543961c7f35584e8"
	I1026 02:11:17.091235   61346 logs.go:123] Gathering logs for kube-controller-manager [a3b71eb2723a1a3180087cbe3d02d2628dac81fc2ac6d749045b91e1a0cb307c] ...
	I1026 02:11:17.091279   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3b71eb2723a1a3180087cbe3d02d2628dac81fc2ac6d749045b91e1a0cb307c"
	I1026 02:11:17.125678   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:11:17.125710   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:11:17.350817   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:11:17.350852   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1026 02:11:17.395054   61346 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 511.979558ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.001297562s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1026 02:11:17.395128   61346 out.go:270] * 
	* 
	W1026 02:11:17.395194   61346 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 511.979558ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.001297562s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 511.979558ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.001297562s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1026 02:11:17.395218   61346 out.go:270] * 
	* 
	W1026 02:11:17.396049   61346 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 02:11:17.398833   61346 out.go:201] 
	W1026 02:11:17.399780   61346 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 511.979558ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.001297562s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 511.979558ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.001297562s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1026 02:11:17.399821   61346 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1026 02:11:17.399850   61346 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1026 02:11:17.401913   61346 out.go:201] 

                                                
                                                
** /stderr **
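The kubeadm output captured above already names the triage commands for this wait-control-plane timeout. A minimal sketch of that sequence, run inside the guest (for example via 'minikube ssh -p kubernetes-upgrade-970804'); the CRI-O socket path is taken from the log above, and CONTAINERID is a placeholder for whichever control-plane container the listing shows as failing:

    # kubelet health and recent logs, as suggested by the kubeadm output
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100
    # list Kubernetes containers via CRI-O, then inspect the one that is crashing
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID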
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-970804 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-26 02:11:17.756173294 +0000 UTC m=+5292.523936478
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-970804 -n kubernetes-upgrade-970804
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-970804 -n kubernetes-upgrade-970804: exit status 2 (224.835226ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
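The {{.Host}} format above only reports the VM state, which is why the non-zero exit is noted as possibly ok. A broader variant of the same post-mortem check (template field names assumed from minikube's status output; profile name taken from this run) would be:

    out/minikube-linux-amd64 status -p kubernetes-upgrade-970804 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'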
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-970804 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-970804 logs -n 25: (2.127376769s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p stopped-upgrade-300387                              | stopped-upgrade-300387    | jenkins | v1.34.0 | 26 Oct 24 01:55 UTC | 26 Oct 24 01:55 UTC |
	| start   | -p no-preload-093148                                   | no-preload-093148         | jenkins | v1.34.0 | 26 Oct 24 01:55 UTC | 26 Oct 24 01:56 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                           |         |         |                     |                     |
	| start   | -p pause-226333                                        | pause-226333              | jenkins | v1.34.0 | 26 Oct 24 01:55 UTC | 26 Oct 24 01:56 UTC |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| pause   | -p pause-226333                                        | pause-226333              | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	|         | --alsologtostderr -v=5                                 |                           |         |         |                     |                     |
	| unpause | -p pause-226333                                        | pause-226333              | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	|         | --alsologtostderr -v=5                                 |                           |         |         |                     |                     |
	| pause   | -p pause-226333                                        | pause-226333              | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	|         | --alsologtostderr -v=5                                 |                           |         |         |                     |                     |
	| delete  | -p pause-226333                                        | pause-226333              | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	|         | --alsologtostderr -v=5                                 |                           |         |         |                     |                     |
	| delete  | -p pause-226333                                        | pause-226333              | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	| start   | -p embed-certs-767480                                  | embed-certs-767480        | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:57 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804 | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804 | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:57 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-093148             | no-preload-093148         | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC | 26 Oct 24 01:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-093148                                   | no-preload-093148         | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-767480            | embed-certs-767480        | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC | 26 Oct 24 01:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p embed-certs-767480                                  | embed-certs-767480        | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804 | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804 | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-385716        | old-k8s-version-385716    | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-093148                  | no-preload-093148         | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-093148                                   | no-preload-093148         | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC | 26 Oct 24 02:09 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                           |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-767480                 | embed-certs-767480        | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p embed-certs-767480                                  | embed-certs-767480        | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC | 26 Oct 24 02:09 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-385716                              | old-k8s-version-385716    | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC | 26 Oct 24 02:00 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-385716             | old-k8s-version-385716    | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC | 26 Oct 24 02:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-385716                              | old-k8s-version-385716    | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 02:00:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 02:00:39.177522   62745 out.go:345] Setting OutFile to fd 1 ...
	I1026 02:00:39.177661   62745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:00:39.177673   62745 out.go:358] Setting ErrFile to fd 2...
	I1026 02:00:39.177680   62745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:00:39.177953   62745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 02:00:39.178950   62745 out.go:352] Setting JSON to false
	I1026 02:00:39.180293   62745 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6179,"bootTime":1729901860,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 02:00:39.180391   62745 start.go:139] virtualization: kvm guest
	I1026 02:00:39.182493   62745 out.go:177] * [old-k8s-version-385716] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 02:00:39.183770   62745 notify.go:220] Checking for updates...
	I1026 02:00:39.183773   62745 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 02:00:39.185074   62745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 02:00:39.186438   62745 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:00:39.187667   62745 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:00:39.188764   62745 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 02:00:39.189932   62745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 02:00:39.191412   62745 config.go:182] Loaded profile config "old-k8s-version-385716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1026 02:00:39.191785   62745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:00:39.191842   62745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:00:39.207286   62745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39591
	I1026 02:00:39.207606   62745 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:00:39.208098   62745 main.go:141] libmachine: Using API Version  1
	I1026 02:00:39.208121   62745 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:00:39.208420   62745 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:00:39.208554   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:00:39.210168   62745 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1026 02:00:39.211253   62745 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 02:00:39.211530   62745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:00:39.211570   62745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:00:39.225940   62745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35215
	I1026 02:00:39.226306   62745 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:00:39.226696   62745 main.go:141] libmachine: Using API Version  1
	I1026 02:00:39.226716   62745 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:00:39.227027   62745 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:00:39.227175   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:00:39.262038   62745 out.go:177] * Using the kvm2 driver based on existing profile
	I1026 02:00:39.263246   62745 start.go:297] selected driver: kvm2
	I1026 02:00:39.263262   62745 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-385716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:00:39.263361   62745 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 02:00:39.264013   62745 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:00:39.264089   62745 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 02:00:39.278956   62745 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 02:00:39.279371   62745 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:00:39.279401   62745 cni.go:84] Creating CNI manager for ""
	I1026 02:00:39.279448   62745 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:00:39.279481   62745 start.go:340] cluster config:
	{Name:old-k8s-version-385716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:00:39.279589   62745 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:00:39.282054   62745 out.go:177] * Starting "old-k8s-version-385716" primary control-plane node in "old-k8s-version-385716" cluster
	I1026 02:00:40.066574   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:00:40.067217   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:00:40.067261   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:00:40.067308   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:00:40.101017   61346 cri.go:89] found id: "868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:40.101041   61346 cri.go:89] found id: ""
	I1026 02:00:40.101048   61346 logs.go:282] 1 containers: [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343]
	I1026 02:00:40.101092   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:40.104707   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:00:40.104759   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:00:40.142358   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:40.142378   61346 cri.go:89] found id: ""
	I1026 02:00:40.142385   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:00:40.142431   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:40.146203   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:00:40.146252   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:00:40.177579   61346 cri.go:89] found id: ""
	I1026 02:00:40.177609   61346 logs.go:282] 0 containers: []
	W1026 02:00:40.177621   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:00:40.177628   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:00:40.177684   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:00:40.211421   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:40.211443   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:40.211447   61346 cri.go:89] found id: ""
	I1026 02:00:40.211455   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:00:40.211515   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:40.215568   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:40.219045   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:00:40.219099   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:00:40.256172   61346 cri.go:89] found id: ""
	I1026 02:00:40.256204   61346 logs.go:282] 0 containers: []
	W1026 02:00:40.256214   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:00:40.256222   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:00:40.256284   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:00:40.293701   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:00:40.293727   61346 cri.go:89] found id: "5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:40.293733   61346 cri.go:89] found id: ""
	I1026 02:00:40.293742   61346 logs.go:282] 2 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3]
	I1026 02:00:40.293796   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:40.297882   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:40.301368   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:00:40.301438   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:00:40.332642   61346 cri.go:89] found id: ""
	I1026 02:00:40.332670   61346 logs.go:282] 0 containers: []
	W1026 02:00:40.332678   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:00:40.332683   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:00:40.332732   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:00:40.362166   61346 cri.go:89] found id: ""
	I1026 02:00:40.362197   61346 logs.go:282] 0 containers: []
	W1026 02:00:40.362208   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:00:40.362219   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:00:40.362236   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:40.420941   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:00:40.420978   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:40.455143   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:00:40.455167   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:00:40.557488   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:00:40.557525   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:00:40.571349   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:00:40.571420   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:00:40.636014   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:00:40.636042   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:00:40.636057   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:40.674054   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:00:40.674083   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:00:40.713408   61346 logs.go:123] Gathering logs for kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343] ...
	I1026 02:00:40.713450   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:40.755851   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:00:40.755881   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:00:40.789022   61346 logs.go:123] Gathering logs for kube-controller-manager [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3] ...
	I1026 02:00:40.789054   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:40.822310   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:00:40.822337   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:00:39.283177   62745 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1026 02:00:39.283204   62745 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1026 02:00:39.283219   62745 cache.go:56] Caching tarball of preloaded images
	I1026 02:00:39.283326   62745 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 02:00:39.283340   62745 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1026 02:00:39.283432   62745 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/config.json ...
	I1026 02:00:39.283602   62745 start.go:360] acquireMachinesLock for old-k8s-version-385716: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 02:00:40.301635   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:00:43.373680   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:00:43.553807   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:00:43.554393   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:00:43.554448   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:00:43.554490   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:00:43.592938   61346 cri.go:89] found id: "868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:43.592959   61346 cri.go:89] found id: ""
	I1026 02:00:43.592966   61346 logs.go:282] 1 containers: [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343]
	I1026 02:00:43.593024   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:43.596864   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:00:43.596927   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:00:43.629090   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:43.629114   61346 cri.go:89] found id: ""
	I1026 02:00:43.629124   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:00:43.629171   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:43.633092   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:00:43.633148   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:00:43.669478   61346 cri.go:89] found id: ""
	I1026 02:00:43.669504   61346 logs.go:282] 0 containers: []
	W1026 02:00:43.669512   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:00:43.669517   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:00:43.669572   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:00:43.710108   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:43.710129   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:43.710134   61346 cri.go:89] found id: ""
	I1026 02:00:43.710140   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:00:43.710192   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:43.714407   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:43.718062   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:00:43.718116   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:00:43.755191   61346 cri.go:89] found id: ""
	I1026 02:00:43.755217   61346 logs.go:282] 0 containers: []
	W1026 02:00:43.755225   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:00:43.755231   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:00:43.755321   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:00:43.793558   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:00:43.793584   61346 cri.go:89] found id: "5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:43.793590   61346 cri.go:89] found id: ""
	I1026 02:00:43.793597   61346 logs.go:282] 2 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3]
	I1026 02:00:43.793647   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:43.797642   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:43.801080   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:00:43.801140   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:00:43.833471   61346 cri.go:89] found id: ""
	I1026 02:00:43.833500   61346 logs.go:282] 0 containers: []
	W1026 02:00:43.833508   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:00:43.833513   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:00:43.833563   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:00:43.864528   61346 cri.go:89] found id: ""
	I1026 02:00:43.864556   61346 logs.go:282] 0 containers: []
	W1026 02:00:43.864563   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:00:43.864571   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:00:43.864583   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:00:43.962636   61346 logs.go:123] Gathering logs for kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343] ...
	I1026 02:00:43.962669   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:44.000853   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:00:44.000882   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:44.039677   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:00:44.039707   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:00:44.076095   61346 logs.go:123] Gathering logs for kube-controller-manager [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3] ...
	I1026 02:00:44.076122   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:44.108731   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:00:44.108757   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:00:44.341157   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:00:44.341192   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:00:44.355030   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:00:44.355056   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:00:44.418910   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:00:44.418934   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:00:44.418952   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:44.477304   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:00:44.477338   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:44.511654   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:00:44.511684   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:00:47.048002   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:00:52.048804   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1026 02:00:52.048868   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:00:52.048920   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:00:52.083594   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:00:52.083616   61346 cri.go:89] found id: "868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:00:52.083621   61346 cri.go:89] found id: ""
	I1026 02:00:52.083628   61346 logs.go:282] 2 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343]
	I1026 02:00:52.083686   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:52.087792   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:52.091654   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:00:52.091722   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:00:52.125866   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:52.125893   61346 cri.go:89] found id: ""
	I1026 02:00:52.125900   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:00:52.125944   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:52.129585   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:00:52.129652   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:00:52.164516   61346 cri.go:89] found id: ""
	I1026 02:00:52.164539   61346 logs.go:282] 0 containers: []
	W1026 02:00:52.164546   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:00:52.164552   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:00:52.164608   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:00:52.197457   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:00:52.197477   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:00:52.197481   61346 cri.go:89] found id: ""
	I1026 02:00:52.197488   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:00:52.197548   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:52.201279   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:52.204927   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:00:52.205001   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:00:52.239482   61346 cri.go:89] found id: ""
	I1026 02:00:52.239510   61346 logs.go:282] 0 containers: []
	W1026 02:00:52.239520   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:00:52.239530   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:00:52.239595   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:00:52.277202   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:00:52.277225   61346 cri.go:89] found id: "5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:00:52.277230   61346 cri.go:89] found id: ""
	I1026 02:00:52.277239   61346 logs.go:282] 2 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3]
	I1026 02:00:52.277299   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:52.281171   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:00:52.284923   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:00:52.284989   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:00:52.322882   61346 cri.go:89] found id: ""
	I1026 02:00:52.322912   61346 logs.go:282] 0 containers: []
	W1026 02:00:52.322920   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:00:52.322925   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:00:52.322983   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:00:52.355215   61346 cri.go:89] found id: ""
	I1026 02:00:52.355240   61346 logs.go:282] 0 containers: []
	W1026 02:00:52.355252   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:00:52.355260   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:00:52.355271   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:00:52.393632   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:00:52.393668   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:00:52.428737   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:00:52.428768   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:00:52.679756   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:00:52.679801   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:00:52.723243   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:00:52.723274   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:00:52.824393   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:00:52.824432   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 02:00:49.453636   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:00:52.525673   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:00:58.605645   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:01:01.677658   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:01:02.893223   61346 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.068760889s)
	W1026 02:01:02.893273   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1026 02:01:02.893284   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:02.893304   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:02.935384   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:02.935419   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:02.968519   61346 logs.go:123] Gathering logs for kube-controller-manager [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3] ...
	I1026 02:01:02.968546   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:01:03.001893   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:03.001929   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:03.015251   61346 logs.go:123] Gathering logs for kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343] ...
	I1026 02:01:03.015284   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:01:03.052713   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:03.052746   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:05.613617   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:07.096121   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": read tcp 192.168.72.1:52856->192.168.72.48:8443: read: connection reset by peer
	I1026 02:01:07.096182   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:07.096236   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:07.142098   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:07.142125   61346 cri.go:89] found id: "868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	I1026 02:01:07.142131   61346 cri.go:89] found id: ""
	I1026 02:01:07.142140   61346 logs.go:282] 2 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343]
	I1026 02:01:07.142192   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:07.146063   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:07.149342   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:07.149390   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:07.180732   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:07.180757   61346 cri.go:89] found id: ""
	I1026 02:01:07.180765   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:07.180807   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:07.184449   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:07.184499   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:07.218220   61346 cri.go:89] found id: ""
	I1026 02:01:07.218244   61346 logs.go:282] 0 containers: []
	W1026 02:01:07.218254   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:07.218262   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:07.218320   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:07.251857   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:07.251879   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:07.251884   61346 cri.go:89] found id: ""
	I1026 02:01:07.251892   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:07.251952   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:07.255585   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:07.258900   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:07.258948   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:07.290771   61346 cri.go:89] found id: ""
	I1026 02:01:07.290798   61346 logs.go:282] 0 containers: []
	W1026 02:01:07.290808   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:07.290815   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:07.290874   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:07.322625   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:07.322650   61346 cri.go:89] found id: "5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:01:07.322657   61346 cri.go:89] found id: ""
	I1026 02:01:07.322666   61346 logs.go:282] 2 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3]
	I1026 02:01:07.322734   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:07.326314   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:07.329628   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:07.329686   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:07.359973   61346 cri.go:89] found id: ""
	I1026 02:01:07.360000   61346 logs.go:282] 0 containers: []
	W1026 02:01:07.360010   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:07.360017   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:07.360072   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:07.391127   61346 cri.go:89] found id: ""
	I1026 02:01:07.391155   61346 logs.go:282] 0 containers: []
	W1026 02:01:07.391162   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:07.391170   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:07.391181   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:07.451181   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:07.451219   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:07.487589   61346 logs.go:123] Gathering logs for kube-controller-manager [5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3] ...
	I1026 02:01:07.487630   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c59f1f3280951a4687c5cc16abbbd40e2eec88f07d747eb0d425ce10cc478f3"
	I1026 02:01:07.520086   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:07.520114   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:07.822924   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:07.822962   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:07.757732   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:01:07.922602   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:07.922640   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:07.991912   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:07.991945   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:07.991961   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:08.029392   61346 logs.go:123] Gathering logs for kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343] ...
	I1026 02:01:08.029431   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	W1026 02:01:08.060876   61346 logs.go:130] failed kube-apiserver [868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343": Process exited with status 1
	stdout:
	
	stderr:
	E1026 02:01:08.052546    3690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343\": container with ID starting with 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343 not found: ID does not exist" containerID="868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	time="2024-10-26T02:01:08Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343\": container with ID starting with 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1026 02:01:08.052546    3690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343\": container with ID starting with 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343 not found: ID does not exist" containerID="868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343"
	time="2024-10-26T02:01:08Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343\": container with ID starting with 868a0cdb11df7ebe9edbd0afde6818e5b977bb8a617127c69c5979e208250343 not found: ID does not exist"
	
	** /stderr **
	I1026 02:01:08.060899   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:08.060914   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:08.074181   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:08.074208   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:08.124123   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:08.124152   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:08.157059   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:08.157089   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:10.694623   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:10.695245   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:10.695296   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:10.695355   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:10.731272   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:10.731293   61346 cri.go:89] found id: ""
	I1026 02:01:10.731301   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:10.731357   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:10.735380   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:10.735440   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:10.772386   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:10.772406   61346 cri.go:89] found id: ""
	I1026 02:01:10.772413   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:10.772464   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:10.776121   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:10.776174   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:10.815627   61346 cri.go:89] found id: ""
	I1026 02:01:10.815659   61346 logs.go:282] 0 containers: []
	W1026 02:01:10.815670   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:10.815677   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:10.815743   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:10.848752   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:10.848782   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:10.848788   61346 cri.go:89] found id: ""
	I1026 02:01:10.848797   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:10.848854   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:10.852529   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:10.856053   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:10.856107   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:10.888501   61346 cri.go:89] found id: ""
	I1026 02:01:10.888530   61346 logs.go:282] 0 containers: []
	W1026 02:01:10.888538   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:10.888544   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:10.888598   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:10.921137   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:10.921162   61346 cri.go:89] found id: ""
	I1026 02:01:10.921171   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:10.921218   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:10.924867   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:10.924921   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:10.957320   61346 cri.go:89] found id: ""
	I1026 02:01:10.957348   61346 logs.go:282] 0 containers: []
	W1026 02:01:10.957356   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:10.957362   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:10.957430   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:10.990595   61346 cri.go:89] found id: ""
	I1026 02:01:10.990640   61346 logs.go:282] 0 containers: []
	W1026 02:01:10.990649   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:10.990661   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:10.990673   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:11.023482   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:11.023516   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:11.126657   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:11.126696   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:11.140676   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:11.140700   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:11.207177   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:11.207201   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:11.207217   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:11.248581   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:11.248611   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:11.285849   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:11.285874   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:11.321506   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:11.321535   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:11.385340   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:11.385374   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:11.418089   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:11.418115   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:10.829705   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:01:14.174396   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:14.175123   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:14.175180   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:14.175231   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:14.207390   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:14.207415   61346 cri.go:89] found id: ""
	I1026 02:01:14.207426   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:14.207485   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:14.211295   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:14.211361   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:14.243130   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:14.243152   61346 cri.go:89] found id: ""
	I1026 02:01:14.243159   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:14.243202   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:14.246874   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:14.246937   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:14.279016   61346 cri.go:89] found id: ""
	I1026 02:01:14.279042   61346 logs.go:282] 0 containers: []
	W1026 02:01:14.279050   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:14.279055   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:14.279107   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:14.310828   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:14.310854   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:14.310858   61346 cri.go:89] found id: ""
	I1026 02:01:14.310865   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:14.310909   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:14.314565   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:14.318093   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:14.318149   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:14.349167   61346 cri.go:89] found id: ""
	I1026 02:01:14.349188   61346 logs.go:282] 0 containers: []
	W1026 02:01:14.349196   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:14.349201   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:14.349249   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:14.381183   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:14.381204   61346 cri.go:89] found id: ""
	I1026 02:01:14.381211   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:14.381255   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:14.384990   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:14.385052   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:14.417426   61346 cri.go:89] found id: ""
	I1026 02:01:14.417453   61346 logs.go:282] 0 containers: []
	W1026 02:01:14.417460   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:14.417466   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:14.417522   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:14.451910   61346 cri.go:89] found id: ""
	I1026 02:01:14.451936   61346 logs.go:282] 0 containers: []
	W1026 02:01:14.451943   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:14.451957   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:14.451974   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:14.485936   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:14.485964   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:14.526045   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:14.526070   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:14.590281   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:14.590314   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:14.623568   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:14.623593   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:14.655446   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:14.655474   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:14.893767   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:14.893809   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:14.995834   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:14.995872   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:15.009129   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:15.009156   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:15.070352   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:15.070379   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:15.070396   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:17.611835   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:17.612559   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:17.612623   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:17.612674   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:17.646568   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:17.646587   61346 cri.go:89] found id: ""
	I1026 02:01:17.646595   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:17.646642   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:17.650559   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:17.650630   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:17.685397   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:17.685436   61346 cri.go:89] found id: ""
	I1026 02:01:17.685444   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:17.685490   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:17.689098   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:17.689152   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:17.720990   61346 cri.go:89] found id: ""
	I1026 02:01:17.721014   61346 logs.go:282] 0 containers: []
	W1026 02:01:17.721021   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:17.721027   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:17.721075   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:17.751951   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:17.751974   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:17.751977   61346 cri.go:89] found id: ""
	I1026 02:01:17.751984   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:17.752028   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:17.755480   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:17.758823   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:17.758887   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:17.789928   61346 cri.go:89] found id: ""
	I1026 02:01:17.789962   61346 logs.go:282] 0 containers: []
	W1026 02:01:17.789972   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:17.789979   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:17.790039   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:17.822040   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:17.822067   61346 cri.go:89] found id: ""
	I1026 02:01:17.822077   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:17.822122   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:17.825667   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:17.825737   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:16.909684   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:01:17.858203   61346 cri.go:89] found id: ""
	I1026 02:01:17.858231   61346 logs.go:282] 0 containers: []
	W1026 02:01:17.858241   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:17.858248   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:17.858308   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:17.890054   61346 cri.go:89] found id: ""
	I1026 02:01:17.890086   61346 logs.go:282] 0 containers: []
	W1026 02:01:17.890095   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:17.890114   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:17.890130   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:17.952564   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:17.952614   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:18.193053   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:18.193090   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:18.206465   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:18.206493   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:18.267097   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:18.267125   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:18.267139   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:18.306389   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:18.306415   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:18.337145   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:18.337174   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:18.372122   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:18.372153   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:18.475552   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:18.475588   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:18.511441   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:18.511470   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:21.044536   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:21.045143   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:21.045196   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:21.045250   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:21.088102   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:21.088129   61346 cri.go:89] found id: ""
	I1026 02:01:21.088139   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:21.088209   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:21.091854   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:21.091924   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:21.124836   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:21.124859   61346 cri.go:89] found id: ""
	I1026 02:01:21.124867   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:21.124923   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:21.128631   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:21.128694   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:21.161231   61346 cri.go:89] found id: ""
	I1026 02:01:21.161256   61346 logs.go:282] 0 containers: []
	W1026 02:01:21.161264   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:21.161269   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:21.161317   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:21.197288   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:21.197316   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:21.197320   61346 cri.go:89] found id: ""
	I1026 02:01:21.197327   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:21.197376   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:21.201028   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:21.204408   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:21.204457   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:21.237679   61346 cri.go:89] found id: ""
	I1026 02:01:21.237706   61346 logs.go:282] 0 containers: []
	W1026 02:01:21.237717   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:21.237724   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:21.237789   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:21.269050   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:21.269074   61346 cri.go:89] found id: ""
	I1026 02:01:21.269081   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:21.269132   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:21.272724   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:21.272783   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:21.305027   61346 cri.go:89] found id: ""
	I1026 02:01:21.305052   61346 logs.go:282] 0 containers: []
	W1026 02:01:21.305063   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:21.305071   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:21.305135   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:21.340621   61346 cri.go:89] found id: ""
	I1026 02:01:21.340653   61346 logs.go:282] 0 containers: []
	W1026 02:01:21.340663   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:21.340678   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:21.340692   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:21.378423   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:21.378454   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:21.412443   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:21.412471   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:21.509369   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:21.509407   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:21.572931   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:21.572963   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:21.572982   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:21.612893   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:21.612921   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:21.832618   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:21.832676   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:21.868234   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:21.868266   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:21.880578   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:21.880603   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:21.948394   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:21.948426   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:19.981732   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:01:24.481168   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:24.481766   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:24.481817   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:24.481870   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:24.516276   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:24.516301   61346 cri.go:89] found id: ""
	I1026 02:01:24.516309   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:24.516371   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:24.520160   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:24.520226   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:24.552991   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:24.553020   61346 cri.go:89] found id: ""
	I1026 02:01:24.553030   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:24.553090   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:24.556648   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:24.556707   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:24.592788   61346 cri.go:89] found id: ""
	I1026 02:01:24.592814   61346 logs.go:282] 0 containers: []
	W1026 02:01:24.592823   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:24.592828   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:24.592877   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:24.625184   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:24.625215   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:24.625221   61346 cri.go:89] found id: ""
	I1026 02:01:24.625230   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:24.625287   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:24.628925   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:24.632271   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:24.632317   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:24.662915   61346 cri.go:89] found id: ""
	I1026 02:01:24.662945   61346 logs.go:282] 0 containers: []
	W1026 02:01:24.662955   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:24.662963   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:24.663022   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:24.695636   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:24.695670   61346 cri.go:89] found id: ""
	I1026 02:01:24.695678   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:24.695736   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:24.699361   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:24.699421   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:24.735746   61346 cri.go:89] found id: ""
	I1026 02:01:24.735775   61346 logs.go:282] 0 containers: []
	W1026 02:01:24.735785   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:24.735792   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:24.735842   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:24.767245   61346 cri.go:89] found id: ""
	I1026 02:01:24.767272   61346 logs.go:282] 0 containers: []
	W1026 02:01:24.767280   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:24.767293   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:24.767305   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:24.831995   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:24.832021   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:24.832036   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:24.868647   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:24.868678   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:25.087247   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:25.087285   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:25.100575   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:25.100605   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:25.140826   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:25.140856   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:25.205409   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:25.205447   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:25.238529   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:25.238553   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:25.271413   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:25.271442   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:25.308405   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:25.308434   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:26.061654   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:01:29.133681   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:01:27.909036   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:27.909608   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:27.909656   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:27.909700   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:27.943040   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:27.943060   61346 cri.go:89] found id: ""
	I1026 02:01:27.943067   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:27.943124   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:27.946739   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:27.946800   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:27.978767   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:27.978797   61346 cri.go:89] found id: ""
	I1026 02:01:27.978806   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:27.978855   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:27.982503   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:27.982561   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:28.015044   61346 cri.go:89] found id: ""
	I1026 02:01:28.015072   61346 logs.go:282] 0 containers: []
	W1026 02:01:28.015083   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:28.015090   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:28.015149   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:28.046707   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:28.046730   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:28.046734   61346 cri.go:89] found id: ""
	I1026 02:01:28.046742   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:28.046792   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:28.050468   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:28.053813   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:28.053877   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:28.085803   61346 cri.go:89] found id: ""
	I1026 02:01:28.085826   61346 logs.go:282] 0 containers: []
	W1026 02:01:28.085833   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:28.085838   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:28.085902   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:28.120410   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:28.120436   61346 cri.go:89] found id: ""
	I1026 02:01:28.120444   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:28.120489   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:28.124294   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:28.124370   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:28.161259   61346 cri.go:89] found id: ""
	I1026 02:01:28.161285   61346 logs.go:282] 0 containers: []
	W1026 02:01:28.161293   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:28.161298   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:28.161350   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:28.192909   61346 cri.go:89] found id: ""
	I1026 02:01:28.192940   61346 logs.go:282] 0 containers: []
	W1026 02:01:28.192950   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:28.192967   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:28.192982   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:28.205380   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:28.205402   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:28.241602   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:28.241629   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:28.278331   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:28.278360   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:28.345248   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:28.345285   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:28.377443   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:28.377471   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:28.419594   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:28.419621   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:28.517317   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:28.517353   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:28.578149   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:28.578172   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:28.578184   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:28.613440   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:28.613468   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:31.342919   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:31.343483   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:31.343531   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:31.343577   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:31.378101   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:31.378120   61346 cri.go:89] found id: ""
	I1026 02:01:31.378127   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:31.378172   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:31.381743   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:31.381817   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:31.412310   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:31.412333   61346 cri.go:89] found id: ""
	I1026 02:01:31.412340   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:31.412388   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:31.416091   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:31.416145   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:31.446692   61346 cri.go:89] found id: ""
	I1026 02:01:31.446719   61346 logs.go:282] 0 containers: []
	W1026 02:01:31.446729   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:31.446736   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:31.446798   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:31.480115   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:31.480136   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:31.480142   61346 cri.go:89] found id: ""
	I1026 02:01:31.480150   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:31.480266   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:31.483932   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:31.487444   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:31.487511   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:31.520531   61346 cri.go:89] found id: ""
	I1026 02:01:31.520564   61346 logs.go:282] 0 containers: []
	W1026 02:01:31.520576   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:31.520583   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:31.520636   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:31.557479   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:31.557507   61346 cri.go:89] found id: ""
	I1026 02:01:31.557516   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:31.557572   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:31.561176   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:31.561239   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:31.597815   61346 cri.go:89] found id: ""
	I1026 02:01:31.597837   61346 logs.go:282] 0 containers: []
	W1026 02:01:31.597844   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:31.597850   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:31.597911   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:31.631625   61346 cri.go:89] found id: ""
	I1026 02:01:31.631652   61346 logs.go:282] 0 containers: []
	W1026 02:01:31.631661   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:31.631671   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:31.631688   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:31.666058   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:31.666084   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:31.896870   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:31.896913   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:31.938222   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:31.938254   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:31.950983   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:31.951007   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:31.991748   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:31.991780   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:32.027912   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:32.027939   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:32.091564   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:32.091599   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:32.124532   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:32.124564   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:32.222001   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:32.222041   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:32.286134   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:34.787080   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:34.787656   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:34.787709   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:34.787757   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:34.820961   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:34.820980   61346 cri.go:89] found id: ""
	I1026 02:01:34.820987   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:34.821033   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:34.824625   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:34.824684   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:34.857704   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:34.857734   61346 cri.go:89] found id: ""
	I1026 02:01:34.857745   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:34.857803   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:34.861462   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:34.861524   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:34.895007   61346 cri.go:89] found id: ""
	I1026 02:01:34.895038   61346 logs.go:282] 0 containers: []
	W1026 02:01:34.895047   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:34.895053   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:34.895101   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:34.926650   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:34.926669   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:34.926673   61346 cri.go:89] found id: ""
	I1026 02:01:34.926679   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:34.926727   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:34.930412   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:34.933891   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:34.933955   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:34.967170   61346 cri.go:89] found id: ""
	I1026 02:01:34.967199   61346 logs.go:282] 0 containers: []
	W1026 02:01:34.967207   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:34.967214   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:34.967267   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:34.999176   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:34.999197   61346 cri.go:89] found id: ""
	I1026 02:01:34.999204   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:34.999256   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:35.003081   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:35.003140   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:35.034864   61346 cri.go:89] found id: ""
	I1026 02:01:35.034895   61346 logs.go:282] 0 containers: []
	W1026 02:01:35.034904   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:35.034910   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:35.034984   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:35.066649   61346 cri.go:89] found id: ""
	I1026 02:01:35.066679   61346 logs.go:282] 0 containers: []
	W1026 02:01:35.066687   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:35.066700   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:35.066717   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:35.105709   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:35.105737   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:35.346505   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:35.346540   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:35.450362   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:35.450396   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:35.463653   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:35.463678   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:35.526627   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:35.526660   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:35.526676   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:35.558724   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:35.558756   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:35.600035   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:35.600061   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:35.635520   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:35.635546   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:35.701957   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:35.701997   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:35.213668   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:01:38.285606   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:01:38.236696   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:38.237245   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:38.237290   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:38.237332   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:38.274939   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:38.274967   61346 cri.go:89] found id: ""
	I1026 02:01:38.274976   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:38.275026   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:38.278658   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:38.278714   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:38.311299   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:38.311320   61346 cri.go:89] found id: ""
	I1026 02:01:38.311327   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:38.311380   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:38.315221   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:38.315278   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:38.346651   61346 cri.go:89] found id: ""
	I1026 02:01:38.346682   61346 logs.go:282] 0 containers: []
	W1026 02:01:38.346692   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:38.346699   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:38.346760   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:38.379260   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:38.379282   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:38.379286   61346 cri.go:89] found id: ""
	I1026 02:01:38.379292   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:38.379336   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:38.383048   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:38.386640   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:38.386688   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:38.418119   61346 cri.go:89] found id: ""
	I1026 02:01:38.418143   61346 logs.go:282] 0 containers: []
	W1026 02:01:38.418150   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:38.418156   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:38.418205   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:38.449593   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:38.449617   61346 cri.go:89] found id: ""
	I1026 02:01:38.449624   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:38.449675   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:38.453336   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:38.453393   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:38.485785   61346 cri.go:89] found id: ""
	I1026 02:01:38.485817   61346 logs.go:282] 0 containers: []
	W1026 02:01:38.485828   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:38.485834   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:38.485881   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:38.517273   61346 cri.go:89] found id: ""
	I1026 02:01:38.517298   61346 logs.go:282] 0 containers: []
	W1026 02:01:38.517305   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:38.517316   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:38.517327   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:38.577625   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:38.577647   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:38.577671   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:38.642831   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:38.642865   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:38.675642   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:38.675667   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:38.775725   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:38.775759   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:38.789346   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:38.789373   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:38.821294   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:38.821322   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:39.047451   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:39.047488   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:39.085242   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:39.085269   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:39.121161   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:39.121192   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:41.663167   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:41.663756   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:41.663810   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:41.663853   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:41.696060   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:41.696085   61346 cri.go:89] found id: ""
	I1026 02:01:41.696094   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:41.696156   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:41.699834   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:41.699900   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:41.736393   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:41.736418   61346 cri.go:89] found id: ""
	I1026 02:01:41.736426   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:41.736479   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:41.740126   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:41.740180   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:41.776330   61346 cri.go:89] found id: ""
	I1026 02:01:41.776355   61346 logs.go:282] 0 containers: []
	W1026 02:01:41.776362   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:41.776367   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:41.776413   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:41.825109   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:41.825130   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:41.825134   61346 cri.go:89] found id: ""
	I1026 02:01:41.825140   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:41.825193   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:41.828957   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:41.832393   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:41.832443   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:41.868232   61346 cri.go:89] found id: ""
	I1026 02:01:41.868258   61346 logs.go:282] 0 containers: []
	W1026 02:01:41.868265   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:41.868270   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:41.868324   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:41.906489   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:41.906516   61346 cri.go:89] found id: ""
	I1026 02:01:41.906524   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:41.906571   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:41.910417   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:41.910478   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:41.946304   61346 cri.go:89] found id: ""
	I1026 02:01:41.946333   61346 logs.go:282] 0 containers: []
	W1026 02:01:41.946342   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:41.946347   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:41.946414   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:41.983472   61346 cri.go:89] found id: ""
	I1026 02:01:41.983494   61346 logs.go:282] 0 containers: []
	W1026 02:01:41.983501   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:41.983518   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:41.983532   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:42.030375   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:42.030407   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:42.067393   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:42.067419   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:42.104374   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:42.104399   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:42.337072   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:42.337109   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:42.442464   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:42.442497   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:42.458447   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:42.458471   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:42.530643   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:42.530664   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:42.530676   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:42.571944   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:42.571972   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:42.645825   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:42.645864   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:45.188832   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:45.189474   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:45.189524   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:45.189574   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:45.221642   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:45.221669   61346 cri.go:89] found id: ""
	I1026 02:01:45.221679   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:45.221740   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:45.225200   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:45.225250   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:45.256641   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:45.256663   61346 cri.go:89] found id: ""
	I1026 02:01:45.256673   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:45.256736   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:45.260301   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:45.260356   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:45.298468   61346 cri.go:89] found id: ""
	I1026 02:01:45.298490   61346 logs.go:282] 0 containers: []
	W1026 02:01:45.298498   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:45.298503   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:45.298560   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:45.336252   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:45.336273   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:45.336277   61346 cri.go:89] found id: ""
	I1026 02:01:45.336283   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:45.336336   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:45.340429   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:45.344395   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:45.344447   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:45.382133   61346 cri.go:89] found id: ""
	I1026 02:01:45.382157   61346 logs.go:282] 0 containers: []
	W1026 02:01:45.382164   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:45.382170   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:45.382218   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:45.423921   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:45.423941   61346 cri.go:89] found id: ""
	I1026 02:01:45.423955   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:45.424001   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:45.427657   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:45.427723   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:45.459454   61346 cri.go:89] found id: ""
	I1026 02:01:45.459477   61346 logs.go:282] 0 containers: []
	W1026 02:01:45.459485   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:45.459491   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:45.459544   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:45.493995   61346 cri.go:89] found id: ""
	I1026 02:01:45.494022   61346 logs.go:282] 0 containers: []
	W1026 02:01:45.494030   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:45.494042   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:45.494053   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:45.558932   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:45.558956   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:45.558968   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:45.600269   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:45.600301   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:45.637631   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:45.637658   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:45.672455   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:45.672478   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:45.898144   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:45.898183   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:46.001553   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:46.001590   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:46.014584   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:46.014612   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:46.050070   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:46.050099   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:46.122012   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:46.122045   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:44.365646   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:01:47.437654   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:01:48.654559   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:48.655226   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:01:48.655278   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:48.655333   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:48.687657   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:48.687678   61346 cri.go:89] found id: ""
	I1026 02:01:48.687685   61346 logs.go:282] 1 containers: [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:48.687731   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:48.691267   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:48.691328   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:48.722176   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:48.722203   61346 cri.go:89] found id: ""
	I1026 02:01:48.722214   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:48.722271   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:48.726029   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:48.726088   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:48.756765   61346 cri.go:89] found id: ""
	I1026 02:01:48.756789   61346 logs.go:282] 0 containers: []
	W1026 02:01:48.756798   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:48.756805   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:48.756870   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:48.789939   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:48.789972   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:48.789976   61346 cri.go:89] found id: ""
	I1026 02:01:48.789983   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:48.790041   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:48.793855   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:48.797178   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:48.797250   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:48.828626   61346 cri.go:89] found id: ""
	I1026 02:01:48.828651   61346 logs.go:282] 0 containers: []
	W1026 02:01:48.828658   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:48.828664   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:48.828712   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:48.864962   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:48.864984   61346 cri.go:89] found id: ""
	I1026 02:01:48.865007   61346 logs.go:282] 1 containers: [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:48.865068   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:48.868946   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:48.869021   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:48.903366   61346 cri.go:89] found id: ""
	I1026 02:01:48.903388   61346 logs.go:282] 0 containers: []
	W1026 02:01:48.903396   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:48.903402   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:48.903461   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:48.933488   61346 cri.go:89] found id: ""
	I1026 02:01:48.933521   61346 logs.go:282] 0 containers: []
	W1026 02:01:48.933530   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:48.933543   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:01:48.933555   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:01:48.968710   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:01:48.968744   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:01:49.070033   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:01:49.070064   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:49.112803   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:01:49.112835   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:49.144343   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:01:49.144373   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:01:49.380238   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:01:49.380286   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:49.420714   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:49.420751   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:49.435215   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:49.435244   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:01:49.499051   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:01:49.499074   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:01:49.499087   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:49.535173   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:01:49.535204   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:52.102258   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:01:53.517697   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:01:57.102653   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1026 02:01:57.102718   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:01:57.102770   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:01:57.137042   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:01:57.137069   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:01:57.137073   61346 cri.go:89] found id: ""
	I1026 02:01:57.137080   61346 logs.go:282] 2 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:01:57.137126   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:57.140841   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:57.144367   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:01:57.144418   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:01:57.180851   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:01:57.180889   61346 cri.go:89] found id: ""
	I1026 02:01:57.180896   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:01:57.180939   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:57.184825   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:01:57.184892   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:01:57.218887   61346 cri.go:89] found id: ""
	I1026 02:01:57.218921   61346 logs.go:282] 0 containers: []
	W1026 02:01:57.218931   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:01:57.218939   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:01:57.219005   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:01:57.250967   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:01:57.250992   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:01:57.250999   61346 cri.go:89] found id: ""
	I1026 02:01:57.251007   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:01:57.251069   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:57.254949   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:57.258367   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:01:57.258422   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:01:57.289606   61346 cri.go:89] found id: ""
	I1026 02:01:57.289642   61346 logs.go:282] 0 containers: []
	W1026 02:01:57.289650   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:01:57.289656   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:01:57.289717   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:01:57.321286   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:01:57.321312   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:01:57.321318   61346 cri.go:89] found id: ""
	I1026 02:01:57.321326   61346 logs.go:282] 2 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:01:57.321372   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:57.325150   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:01:57.328491   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:01:57.328544   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:01:57.359673   61346 cri.go:89] found id: ""
	I1026 02:01:57.359695   61346 logs.go:282] 0 containers: []
	W1026 02:01:57.359702   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:01:57.359707   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:01:57.359761   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:01:57.396815   61346 cri.go:89] found id: ""
	I1026 02:01:57.396842   61346 logs.go:282] 0 containers: []
	W1026 02:01:57.396849   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:01:57.396858   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:01:57.396875   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:01:57.411804   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:01:57.411830   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 02:01:56.589764   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:02:02.669686   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:02:07.483917   61346 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.072063845s)
	W1026 02:02:07.483960   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1026 02:02:07.483975   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:07.483988   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:07.521628   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:07.521658   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:07.552547   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:02:07.552573   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:02:07.591042   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:07.591068   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:07.695732   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:07.695772   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:07.733457   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:07.733486   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:07.802864   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:07.802901   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:05.741723   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:02:07.835577   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:07.835604   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:08.091616   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:08.091652   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:08.128075   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:02:08.128100   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:02:10.663065   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:11.777091   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": read tcp 192.168.72.1:59572->192.168.72.48:8443: read: connection reset by peer
	I1026 02:02:11.777150   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:11.777200   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:11.820458   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:11.820483   61346 cri.go:89] found id: "3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	I1026 02:02:11.820489   61346 cri.go:89] found id: ""
	I1026 02:02:11.820496   61346 logs.go:282] 2 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]
	I1026 02:02:11.820542   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:11.824677   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:11.828148   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:11.828213   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:11.860806   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:11.860830   61346 cri.go:89] found id: ""
	I1026 02:02:11.860838   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:11.860888   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:11.864410   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:11.864467   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:11.895785   61346 cri.go:89] found id: ""
	I1026 02:02:11.895810   61346 logs.go:282] 0 containers: []
	W1026 02:02:11.895817   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:11.895823   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:11.895870   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:11.931392   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:11.931416   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:11.931421   61346 cri.go:89] found id: ""
	I1026 02:02:11.931427   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:11.931477   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:11.938408   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:11.941713   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:11.941769   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:11.994793   61346 cri.go:89] found id: ""
	I1026 02:02:11.994822   61346 logs.go:282] 0 containers: []
	W1026 02:02:11.994833   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:11.994840   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:11.994900   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:12.028264   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:12.028286   61346 cri.go:89] found id: "5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:02:12.028290   61346 cri.go:89] found id: ""
	I1026 02:02:12.028300   61346 logs.go:282] 2 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae]
	I1026 02:02:12.028348   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:12.031897   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:12.035412   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:12.035466   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:12.066681   61346 cri.go:89] found id: ""
	I1026 02:02:12.066708   61346 logs.go:282] 0 containers: []
	W1026 02:02:12.066716   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:12.066722   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:12.066766   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:12.099138   61346 cri.go:89] found id: ""
	I1026 02:02:12.099161   61346 logs.go:282] 0 containers: []
	W1026 02:02:12.099168   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:12.099176   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:12.099189   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:12.133743   61346 logs.go:123] Gathering logs for kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68] ...
	I1026 02:02:12.133769   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	W1026 02:02:12.165695   61346 logs.go:130] failed kube-apiserver [3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68": Process exited with status 1
	stdout:
	
	stderr:
	E1026 02:02:12.158231    5086 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68\": container with ID starting with 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68 not found: ID does not exist" containerID="3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	time="2024-10-26T02:02:12Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68\": container with ID starting with 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1026 02:02:12.158231    5086 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68\": container with ID starting with 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68 not found: ID does not exist" containerID="3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68"
	time="2024-10-26T02:02:12Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68\": container with ID starting with 3b7e13b5d8b29ebb3d5a6fe5c8f5d77f9418064de80d58067b07fe6b90c28c68 not found: ID does not exist"
	
	** /stderr **
	I1026 02:02:12.165733   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:12.165751   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:12.198406   61346 logs.go:123] Gathering logs for kube-controller-manager [5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae] ...
	I1026 02:02:12.198435   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3ea2f96ee806a935b512743ee694cad8a8aac6914a074642073fb3f12d65ae"
	I1026 02:02:12.230831   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:12.230859   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:12.508892   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:12.508931   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:12.552240   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:12.552266   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:12.653540   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:12.653583   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:12.668700   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:12.668725   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:12.738451   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:12.738474   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:12.738486   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:12.787014   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:12.787042   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:11.821625   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:02:12.858583   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:12.858617   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:15.403465   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:15.404114   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:15.404171   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:15.404221   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:15.440283   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:15.440304   61346 cri.go:89] found id: ""
	I1026 02:02:15.440311   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:15.440358   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:15.444163   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:15.444207   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:15.482062   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:15.482087   61346 cri.go:89] found id: ""
	I1026 02:02:15.482097   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:15.482144   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:15.485868   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:15.485917   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:15.518077   61346 cri.go:89] found id: ""
	I1026 02:02:15.518105   61346 logs.go:282] 0 containers: []
	W1026 02:02:15.518114   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:15.518122   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:15.518188   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:15.551232   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:15.551254   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:15.551260   61346 cri.go:89] found id: ""
	I1026 02:02:15.551267   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:15.551324   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:15.554964   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:15.558439   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:15.558489   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:15.595053   61346 cri.go:89] found id: ""
	I1026 02:02:15.595075   61346 logs.go:282] 0 containers: []
	W1026 02:02:15.595083   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:15.595088   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:15.595133   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:15.627051   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:15.627072   61346 cri.go:89] found id: ""
	I1026 02:02:15.627081   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:15.627143   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:15.630841   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:15.630899   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:15.662237   61346 cri.go:89] found id: ""
	I1026 02:02:15.662263   61346 logs.go:282] 0 containers: []
	W1026 02:02:15.662270   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:15.662276   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:15.662322   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:15.694582   61346 cri.go:89] found id: ""
	I1026 02:02:15.694607   61346 logs.go:282] 0 containers: []
	W1026 02:02:15.694614   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:15.694632   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:15.694643   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:15.795538   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:15.795575   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:15.856869   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:15.856897   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:15.856909   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:15.896982   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:15.897012   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:15.930053   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:15.930080   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:16.205663   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:16.205705   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:16.242284   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:16.242311   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:16.255367   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:16.255394   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:16.291142   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:16.291170   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:16.360224   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:16.360257   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:14.893690   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:02:18.895015   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:18.895672   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:18.895716   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:18.895765   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:18.929029   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:18.929057   61346 cri.go:89] found id: ""
	I1026 02:02:18.929071   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:18.929129   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:18.932722   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:18.932779   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:18.964370   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:18.964393   61346 cri.go:89] found id: ""
	I1026 02:02:18.964402   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:18.964466   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:18.968062   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:18.968129   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:19.001916   61346 cri.go:89] found id: ""
	I1026 02:02:19.001943   61346 logs.go:282] 0 containers: []
	W1026 02:02:19.001950   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:19.001956   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:19.002002   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:19.033576   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:19.033602   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:19.033607   61346 cri.go:89] found id: ""
	I1026 02:02:19.033614   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:19.033674   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:19.037391   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:19.040838   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:19.040901   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:19.073540   61346 cri.go:89] found id: ""
	I1026 02:02:19.073565   61346 logs.go:282] 0 containers: []
	W1026 02:02:19.073572   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:19.073577   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:19.073622   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:19.108089   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:19.108114   61346 cri.go:89] found id: ""
	I1026 02:02:19.108123   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:19.108167   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:19.111887   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:19.111946   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:19.146400   61346 cri.go:89] found id: ""
	I1026 02:02:19.146432   61346 logs.go:282] 0 containers: []
	W1026 02:02:19.146442   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:19.146450   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:19.146504   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:19.179780   61346 cri.go:89] found id: ""
	I1026 02:02:19.179811   61346 logs.go:282] 0 containers: []
	W1026 02:02:19.179822   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:19.179840   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:19.179856   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:19.213669   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:19.213701   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:19.250015   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:19.250042   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:19.354985   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:19.355016   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:19.439524   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:19.439557   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:19.475428   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:19.475455   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:19.516451   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:19.516480   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:19.749926   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:19.749968   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:19.791625   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:19.791657   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:19.805157   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:19.805186   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:19.868578   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:22.369637   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:22.370240   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:22.370288   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:22.370343   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:22.403651   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:22.403680   61346 cri.go:89] found id: ""
	I1026 02:02:22.403691   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:22.403759   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:22.407572   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:22.407644   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:22.438929   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:22.438957   61346 cri.go:89] found id: ""
	I1026 02:02:22.438964   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:22.439016   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:22.442590   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:22.442642   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:22.476807   61346 cri.go:89] found id: ""
	I1026 02:02:22.476835   61346 logs.go:282] 0 containers: []
	W1026 02:02:22.476843   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:22.476848   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:22.476895   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:22.509688   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:22.509719   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:22.509725   61346 cri.go:89] found id: ""
	I1026 02:02:22.509734   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:22.509793   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:22.513628   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:22.517162   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:22.517213   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:22.548953   61346 cri.go:89] found id: ""
	I1026 02:02:22.548978   61346 logs.go:282] 0 containers: []
	W1026 02:02:22.548987   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:22.548993   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:22.549049   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:22.582352   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:22.582372   61346 cri.go:89] found id: ""
	I1026 02:02:22.582379   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:22.582425   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:22.586291   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:22.586343   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:22.617896   61346 cri.go:89] found id: ""
	I1026 02:02:22.617919   61346 logs.go:282] 0 containers: []
	W1026 02:02:22.617928   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:22.617935   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:22.617997   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:22.649592   61346 cri.go:89] found id: ""
	I1026 02:02:22.649620   61346 logs.go:282] 0 containers: []
	W1026 02:02:22.649636   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:22.649653   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:22.649667   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:22.681588   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:22.681615   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:20.973699   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:02:24.045753   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:02:22.910716   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:22.910753   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:22.972357   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:22.972383   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:22.972398   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:23.009349   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:23.009376   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:23.046544   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:23.046573   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:23.113784   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:23.113819   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:23.218951   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:23.218990   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:23.232688   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:23.232716   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:23.265609   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:23.265634   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:25.809260   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:25.809924   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:25.809978   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:25.810026   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:25.842996   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:25.843018   61346 cri.go:89] found id: ""
	I1026 02:02:25.843026   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:25.843071   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:25.846813   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:25.846870   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:25.879374   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:25.879395   61346 cri.go:89] found id: ""
	I1026 02:02:25.879403   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:25.879449   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:25.883367   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:25.883429   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:25.916515   61346 cri.go:89] found id: ""
	I1026 02:02:25.916552   61346 logs.go:282] 0 containers: []
	W1026 02:02:25.916565   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:25.916573   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:25.916638   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:25.949559   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:25.949581   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:25.949586   61346 cri.go:89] found id: ""
	I1026 02:02:25.949592   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:25.949637   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:25.953333   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:25.956778   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:25.956843   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:25.989764   61346 cri.go:89] found id: ""
	I1026 02:02:25.989788   61346 logs.go:282] 0 containers: []
	W1026 02:02:25.989796   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:25.989802   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:25.989851   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:26.025336   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:26.025356   61346 cri.go:89] found id: ""
	I1026 02:02:26.025365   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:26.025431   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:26.029006   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:26.029067   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:26.061030   61346 cri.go:89] found id: ""
	I1026 02:02:26.061055   61346 logs.go:282] 0 containers: []
	W1026 02:02:26.061062   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:26.061069   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:26.061123   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:26.093721   61346 cri.go:89] found id: ""
	I1026 02:02:26.093745   61346 logs.go:282] 0 containers: []
	W1026 02:02:26.093755   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:26.093768   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:26.093778   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:26.125693   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:26.125717   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:26.161383   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:26.161410   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:26.199392   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:26.199419   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:26.267481   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:26.267513   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:26.328261   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:26.328288   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:26.328300   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:26.361570   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:26.361603   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:26.579535   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:26.579573   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:26.619047   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:26.619075   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:26.725765   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:26.725799   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:29.239446   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:29.240070   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:29.240131   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:29.240182   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:29.276196   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:29.276221   61346 cri.go:89] found id: ""
	I1026 02:02:29.276231   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:29.276280   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:29.280051   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:29.280117   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:29.316260   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:29.316281   61346 cri.go:89] found id: ""
	I1026 02:02:29.316288   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:29.316346   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:29.320038   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:29.320104   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:29.353542   61346 cri.go:89] found id: ""
	I1026 02:02:29.353572   61346 logs.go:282] 0 containers: []
	W1026 02:02:29.353580   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:29.353586   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:29.353638   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:29.393524   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:29.393544   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:29.393547   61346 cri.go:89] found id: ""
	I1026 02:02:29.393554   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:29.393600   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:29.397227   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:29.400632   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:29.400688   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:29.432303   61346 cri.go:89] found id: ""
	I1026 02:02:29.432326   61346 logs.go:282] 0 containers: []
	W1026 02:02:29.432334   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:29.432339   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:29.432395   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:29.465199   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:29.465219   61346 cri.go:89] found id: ""
	I1026 02:02:29.465226   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:29.465272   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:29.469249   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:29.469308   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:29.503144   61346 cri.go:89] found id: ""
	I1026 02:02:29.503170   61346 logs.go:282] 0 containers: []
	W1026 02:02:29.503178   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:29.503184   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:29.503232   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:29.536928   61346 cri.go:89] found id: ""
	I1026 02:02:29.536955   61346 logs.go:282] 0 containers: []
	W1026 02:02:29.536963   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:29.536977   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:29.536991   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:29.599022   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:29.599042   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:29.599055   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:29.668945   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:29.668980   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:29.702721   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:29.702753   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:29.930599   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:29.930648   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:29.973388   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:29.973438   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:30.076853   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:30.076892   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:30.090433   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:30.090458   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:30.125968   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:30.125994   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:30.163546   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:30.163576   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:32.698485   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:32.699097   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:32.699145   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:32.699189   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:32.733585   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:32.733612   61346 cri.go:89] found id: ""
	I1026 02:02:32.733622   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:32.733684   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:32.737320   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:32.737375   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:32.769567   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:32.769589   61346 cri.go:89] found id: ""
	I1026 02:02:32.769596   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:32.769645   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:32.773255   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:32.773331   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:32.805727   61346 cri.go:89] found id: ""
	I1026 02:02:32.805756   61346 logs.go:282] 0 containers: []
	W1026 02:02:32.805765   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:32.805777   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:32.805842   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:30.129635   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:02:33.197654   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:02:32.839199   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:32.839218   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:32.839222   61346 cri.go:89] found id: ""
	I1026 02:02:32.839229   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:32.839271   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:32.842886   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:32.846126   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:32.846182   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:32.878672   61346 cri.go:89] found id: ""
	I1026 02:02:32.878700   61346 logs.go:282] 0 containers: []
	W1026 02:02:32.878710   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:32.878718   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:32.878769   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:32.915524   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:32.915549   61346 cri.go:89] found id: ""
	I1026 02:02:32.915558   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:32.915613   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:32.919431   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:32.919492   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:32.952460   61346 cri.go:89] found id: ""
	I1026 02:02:32.952489   61346 logs.go:282] 0 containers: []
	W1026 02:02:32.952500   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:32.952506   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:32.952551   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:32.985156   61346 cri.go:89] found id: ""
	I1026 02:02:32.985183   61346 logs.go:282] 0 containers: []
	W1026 02:02:32.985191   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:32.985206   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:32.985218   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:33.205658   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:33.205693   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:33.315001   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:33.315038   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:33.382645   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:33.382670   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:33.382682   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:33.454153   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:33.454188   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:33.487804   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:33.487834   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:33.521200   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:33.521236   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:33.534212   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:33.534243   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:33.570941   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:33.570973   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:33.609836   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:33.609868   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:36.151548   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:36.152194   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:36.152241   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:36.152288   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:36.186165   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:36.186190   61346 cri.go:89] found id: ""
	I1026 02:02:36.186198   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:36.186258   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:36.190006   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:36.190072   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:36.221821   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:36.221840   61346 cri.go:89] found id: ""
	I1026 02:02:36.221847   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:36.221903   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:36.225739   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:36.225798   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:36.257132   61346 cri.go:89] found id: ""
	I1026 02:02:36.257158   61346 logs.go:282] 0 containers: []
	W1026 02:02:36.257165   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:36.257170   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:36.257216   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:36.290728   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:36.290750   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:36.290756   61346 cri.go:89] found id: ""
	I1026 02:02:36.290765   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:36.290824   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:36.294642   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:36.298105   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:36.298176   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:36.328680   61346 cri.go:89] found id: ""
	I1026 02:02:36.328706   61346 logs.go:282] 0 containers: []
	W1026 02:02:36.328714   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:36.328719   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:36.328779   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:36.360650   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:36.360673   61346 cri.go:89] found id: ""
	I1026 02:02:36.360683   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:36.360740   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:36.364455   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:36.364528   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:36.397049   61346 cri.go:89] found id: ""
	I1026 02:02:36.397080   61346 logs.go:282] 0 containers: []
	W1026 02:02:36.397090   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:36.397098   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:36.397159   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:36.428657   61346 cri.go:89] found id: ""
	I1026 02:02:36.428682   61346 logs.go:282] 0 containers: []
	W1026 02:02:36.428692   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:36.428708   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:36.428722   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:36.655812   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:36.655850   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:36.701057   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:36.701080   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:36.810072   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:36.810110   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:36.851029   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:36.851059   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:36.884147   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:36.884176   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:36.964433   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:36.964467   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:36.997887   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:36.997913   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:37.011320   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:37.011351   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:37.073351   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:37.073372   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:37.073388   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:39.277626   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:02:39.616125   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:39.616763   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:39.616809   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:39.616859   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:39.650718   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:39.650741   61346 cri.go:89] found id: ""
	I1026 02:02:39.650747   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:39.650803   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:39.654856   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:39.654918   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:39.687829   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:39.687855   61346 cri.go:89] found id: ""
	I1026 02:02:39.687862   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:39.687916   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:39.691736   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:39.691813   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:39.725456   61346 cri.go:89] found id: ""
	I1026 02:02:39.725478   61346 logs.go:282] 0 containers: []
	W1026 02:02:39.725486   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:39.725492   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:39.725543   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:39.758138   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:39.758203   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:39.758215   61346 cri.go:89] found id: ""
	I1026 02:02:39.758223   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:39.758288   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:39.762009   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:39.765676   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:39.765728   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:39.797015   61346 cri.go:89] found id: ""
	I1026 02:02:39.797046   61346 logs.go:282] 0 containers: []
	W1026 02:02:39.797054   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:39.797060   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:39.797120   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:39.828873   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:39.828899   61346 cri.go:89] found id: ""
	I1026 02:02:39.828908   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:39.828968   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:39.832708   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:39.832761   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:39.865055   61346 cri.go:89] found id: ""
	I1026 02:02:39.865085   61346 logs.go:282] 0 containers: []
	W1026 02:02:39.865095   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:39.865103   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:39.865172   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:39.896749   61346 cri.go:89] found id: ""
	I1026 02:02:39.896776   61346 logs.go:282] 0 containers: []
	W1026 02:02:39.896784   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:39.896795   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:39.896810   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:39.909739   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:39.909769   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:39.974509   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:39.974534   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:39.974546   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:40.011144   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:40.011177   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:40.042751   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:40.042782   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:40.286733   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:40.286777   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:40.395108   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:40.395144   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:40.433276   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:40.433310   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:40.502277   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:40.502316   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:40.535877   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:40.535907   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:42.349623   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:02:43.076004   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:43.076575   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:43.076644   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:43.076703   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:43.110248   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:43.110271   61346 cri.go:89] found id: ""
	I1026 02:02:43.110279   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:43.110324   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:43.114088   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:43.114143   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:43.148461   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:43.148484   61346 cri.go:89] found id: ""
	I1026 02:02:43.148491   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:43.148536   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:43.152157   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:43.152214   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:43.183703   61346 cri.go:89] found id: ""
	I1026 02:02:43.183736   61346 logs.go:282] 0 containers: []
	W1026 02:02:43.183746   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:43.183753   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:43.183814   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:43.217197   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:43.217223   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:43.217229   61346 cri.go:89] found id: ""
	I1026 02:02:43.217237   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:43.217300   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:43.220997   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:43.224329   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:43.224375   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:43.256892   61346 cri.go:89] found id: ""
	I1026 02:02:43.256921   61346 logs.go:282] 0 containers: []
	W1026 02:02:43.256928   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:43.256934   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:43.256995   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:43.290558   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:43.290610   61346 cri.go:89] found id: ""
	I1026 02:02:43.290621   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:43.290676   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:43.294453   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:43.294528   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:43.326405   61346 cri.go:89] found id: ""
	I1026 02:02:43.326433   61346 logs.go:282] 0 containers: []
	W1026 02:02:43.326440   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:43.326445   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:43.326496   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:43.358535   61346 cri.go:89] found id: ""
	I1026 02:02:43.358567   61346 logs.go:282] 0 containers: []
	W1026 02:02:43.358578   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:43.358595   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:43.358609   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:43.461667   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:43.461704   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:43.500697   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:43.500728   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:43.573581   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:43.573631   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:43.606316   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:43.606343   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:43.645077   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:43.645106   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:43.658729   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:43.658762   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:43.719053   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:43.719083   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:43.719100   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:43.755287   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:43.755316   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:43.788902   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:43.788933   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:46.511867   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:46.512465   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:46.512510   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:46.512556   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:46.548273   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:46.548297   61346 cri.go:89] found id: ""
	I1026 02:02:46.548304   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:46.548347   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:46.552088   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:46.552138   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:46.584097   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:46.584119   61346 cri.go:89] found id: ""
	I1026 02:02:46.584127   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:46.584181   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:46.588008   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:46.588072   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:46.620524   61346 cri.go:89] found id: ""
	I1026 02:02:46.620548   61346 logs.go:282] 0 containers: []
	W1026 02:02:46.620557   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:46.620562   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:46.620618   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:46.658098   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:46.658126   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:46.658132   61346 cri.go:89] found id: ""
	I1026 02:02:46.658140   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:46.658199   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:46.661881   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:46.665176   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:46.665225   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:46.696936   61346 cri.go:89] found id: ""
	I1026 02:02:46.696964   61346 logs.go:282] 0 containers: []
	W1026 02:02:46.696971   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:46.696977   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:46.697039   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:46.729366   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:46.729389   61346 cri.go:89] found id: ""
	I1026 02:02:46.729396   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:46.729466   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:46.733337   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:46.733467   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:46.764260   61346 cri.go:89] found id: ""
	I1026 02:02:46.764282   61346 logs.go:282] 0 containers: []
	W1026 02:02:46.764290   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:46.764296   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:46.764344   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:46.797523   61346 cri.go:89] found id: ""
	I1026 02:02:46.797548   61346 logs.go:282] 0 containers: []
	W1026 02:02:46.797557   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:46.797567   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:46.797579   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:46.909622   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:46.909659   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:46.974670   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:46.974695   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:46.974709   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:47.018707   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:47.018743   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:47.051128   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:47.051155   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:47.281134   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:47.281179   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:47.295219   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:47.295256   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:47.329525   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:47.329555   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:47.404243   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:47.404280   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:47.440107   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:47.440141   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:48.429703   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:02:49.978053   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:49.978652   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:49.978704   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:49.978764   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:50.013089   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:50.013119   61346 cri.go:89] found id: ""
	I1026 02:02:50.013129   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:50.013190   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:50.017006   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:50.017088   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:50.048872   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:50.048897   61346 cri.go:89] found id: ""
	I1026 02:02:50.048906   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:50.048967   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:50.052557   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:50.052635   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:50.084898   61346 cri.go:89] found id: ""
	I1026 02:02:50.084928   61346 logs.go:282] 0 containers: []
	W1026 02:02:50.084936   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:50.084942   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:50.084989   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:50.116188   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:50.116212   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:50.116218   61346 cri.go:89] found id: ""
	I1026 02:02:50.116226   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:50.116270   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:50.119872   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:50.123212   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:50.123275   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:50.160590   61346 cri.go:89] found id: ""
	I1026 02:02:50.160621   61346 logs.go:282] 0 containers: []
	W1026 02:02:50.160632   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:50.160640   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:50.160689   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:50.192979   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:50.192999   61346 cri.go:89] found id: ""
	I1026 02:02:50.193006   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:50.193051   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:50.196593   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:50.196660   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:50.226322   61346 cri.go:89] found id: ""
	I1026 02:02:50.226349   61346 logs.go:282] 0 containers: []
	W1026 02:02:50.226358   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:50.226366   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:50.226416   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:50.257832   61346 cri.go:89] found id: ""
	I1026 02:02:50.257856   61346 logs.go:282] 0 containers: []
	W1026 02:02:50.257863   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:50.257877   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:50.257890   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:50.302398   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:50.302424   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:50.379629   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:50.379667   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:50.415070   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:50.415100   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:50.629087   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:50.629123   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:50.740093   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:50.740133   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:50.757252   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:50.757277   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:50.824893   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:50.824918   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:50.824929   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:50.866786   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:50.866812   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:50.905577   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:50.905603   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:51.501696   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:02:53.443026   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:53.443662   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:53.443713   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:53.443759   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:53.483805   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:53.483824   61346 cri.go:89] found id: ""
	I1026 02:02:53.483831   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:53.483890   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:53.487896   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:53.487953   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:53.524571   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:53.524597   61346 cri.go:89] found id: ""
	I1026 02:02:53.524605   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:53.524680   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:53.528250   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:53.528319   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:53.559251   61346 cri.go:89] found id: ""
	I1026 02:02:53.559278   61346 logs.go:282] 0 containers: []
	W1026 02:02:53.559286   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:53.559291   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:53.559337   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:53.591011   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:53.591031   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:53.591035   61346 cri.go:89] found id: ""
	I1026 02:02:53.591041   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:53.591087   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:53.594869   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:53.598201   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:53.598254   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:53.631253   61346 cri.go:89] found id: ""
	I1026 02:02:53.631278   61346 logs.go:282] 0 containers: []
	W1026 02:02:53.631288   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:53.631295   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:53.631356   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:53.663634   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:53.663657   61346 cri.go:89] found id: ""
	I1026 02:02:53.663668   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:53.663712   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:53.667626   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:53.667681   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:53.698818   61346 cri.go:89] found id: ""
	I1026 02:02:53.698847   61346 logs.go:282] 0 containers: []
	W1026 02:02:53.698854   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:53.698859   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:53.698906   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:53.732094   61346 cri.go:89] found id: ""
	I1026 02:02:53.732122   61346 logs.go:282] 0 containers: []
	W1026 02:02:53.732129   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:53.732141   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:53.732151   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:53.770127   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:53.770155   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:53.881427   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:53.881464   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:53.947809   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:53.947838   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:53.947855   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:53.989091   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:53.989125   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:54.022240   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:54.022268   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:54.054505   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:54.054535   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:54.271043   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:54.271078   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:54.284024   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:54.284049   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:54.323290   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:54.323321   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:56.896246   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:02:56.896816   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:02:56.896862   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:02:56.896910   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:02:56.933652   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:56.933672   61346 cri.go:89] found id: ""
	I1026 02:02:56.933679   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:02:56.933729   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:56.937440   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:02:56.937481   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:02:56.968265   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:56.968291   61346 cri.go:89] found id: ""
	I1026 02:02:56.968301   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:02:56.968355   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:56.971944   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:02:56.972014   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:02:57.003503   61346 cri.go:89] found id: ""
	I1026 02:02:57.003534   61346 logs.go:282] 0 containers: []
	W1026 02:02:57.003552   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:02:57.003559   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:02:57.003612   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:02:57.034482   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:57.034507   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:57.034513   61346 cri.go:89] found id: ""
	I1026 02:02:57.034521   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:02:57.034576   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:57.038273   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:57.041648   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:02:57.041704   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:02:57.075834   61346 cri.go:89] found id: ""
	I1026 02:02:57.075862   61346 logs.go:282] 0 containers: []
	W1026 02:02:57.075880   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:02:57.075886   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:02:57.075938   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:02:57.109328   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:57.109351   61346 cri.go:89] found id: ""
	I1026 02:02:57.109358   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:02:57.109406   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:02:57.112981   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:02:57.113039   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:02:57.149339   61346 cri.go:89] found id: ""
	I1026 02:02:57.149361   61346 logs.go:282] 0 containers: []
	W1026 02:02:57.149369   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:02:57.149374   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:02:57.149430   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:02:57.181958   61346 cri.go:89] found id: ""
	I1026 02:02:57.181985   61346 logs.go:282] 0 containers: []
	W1026 02:02:57.181993   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:02:57.182005   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:02:57.182017   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:02:57.244186   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:02:57.244202   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:02:57.244218   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:02:57.318673   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:02:57.318705   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:02:57.332355   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:02:57.332390   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:02:57.368229   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:02:57.368260   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:02:57.404874   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:02:57.404905   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:02:57.439449   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:02:57.439476   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:02:57.470512   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:02:57.470541   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:02:57.700847   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:02:57.700889   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:02:57.746463   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:02:57.746493   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:02:57.581611   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:03:00.357608   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:03:00.358300   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:03:00.358361   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:03:00.358420   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:03:00.394371   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:03:00.394395   61346 cri.go:89] found id: ""
	I1026 02:03:00.394403   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:03:00.394458   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:00.398160   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:03:00.398213   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:03:00.429928   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:03:00.429955   61346 cri.go:89] found id: ""
	I1026 02:03:00.429965   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:03:00.430021   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:00.433716   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:03:00.433779   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:03:00.468245   61346 cri.go:89] found id: ""
	I1026 02:03:00.468272   61346 logs.go:282] 0 containers: []
	W1026 02:03:00.468279   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:03:00.468285   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:03:00.468333   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:03:00.502849   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:03:00.502882   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:03:00.502888   61346 cri.go:89] found id: ""
	I1026 02:03:00.502898   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:03:00.502956   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:00.506808   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:00.510244   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:03:00.510300   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:03:00.550741   61346 cri.go:89] found id: ""
	I1026 02:03:00.550774   61346 logs.go:282] 0 containers: []
	W1026 02:03:00.550784   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:03:00.550791   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:03:00.550857   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:03:00.583299   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:03:00.583329   61346 cri.go:89] found id: ""
	I1026 02:03:00.583339   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:03:00.583395   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:00.587382   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:03:00.587449   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:03:00.620327   61346 cri.go:89] found id: ""
	I1026 02:03:00.620355   61346 logs.go:282] 0 containers: []
	W1026 02:03:00.620364   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:03:00.620369   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:03:00.620422   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:03:00.653640   61346 cri.go:89] found id: ""
	I1026 02:03:00.653674   61346 logs.go:282] 0 containers: []
	W1026 02:03:00.653684   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:03:00.653702   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:03:00.653716   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:03:00.891000   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:03:00.891039   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:03:01.005914   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:03:01.005950   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:03:01.040332   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:03:01.040363   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:03:01.079913   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:03:01.079949   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:03:01.159277   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:03:01.159313   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:03:01.193303   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:03:01.193330   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:03:01.225091   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:03:01.225116   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:03:01.238709   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:03:01.238748   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:03:01.306043   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:03:01.306068   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:03:01.306082   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:03:00.657603   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:03:03.842485   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:03:03.843131   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:03:03.843190   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:03:03.843238   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:03:03.877861   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:03:03.877899   61346 cri.go:89] found id: ""
	I1026 02:03:03.877907   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:03:03.877969   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:03.881614   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:03:03.881674   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:03:03.912267   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:03:03.912288   61346 cri.go:89] found id: ""
	I1026 02:03:03.912296   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:03:03.912345   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:03.916002   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:03:03.916068   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:03:03.953650   61346 cri.go:89] found id: ""
	I1026 02:03:03.953680   61346 logs.go:282] 0 containers: []
	W1026 02:03:03.953690   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:03:03.953697   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:03:03.953745   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:03:03.985924   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:03:03.985945   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:03:03.985949   61346 cri.go:89] found id: ""
	I1026 02:03:03.985955   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:03:03.986000   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:03.989679   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:03.992904   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:03:03.992975   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:03:04.026028   61346 cri.go:89] found id: ""
	I1026 02:03:04.026051   61346 logs.go:282] 0 containers: []
	W1026 02:03:04.026059   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:03:04.026064   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:03:04.026119   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:03:04.057365   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:03:04.057386   61346 cri.go:89] found id: ""
	I1026 02:03:04.057394   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:03:04.057459   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:04.060937   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:03:04.060990   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:03:04.091927   61346 cri.go:89] found id: ""
	I1026 02:03:04.091954   61346 logs.go:282] 0 containers: []
	W1026 02:03:04.091964   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:03:04.091972   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:03:04.092033   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:03:04.126411   61346 cri.go:89] found id: ""
	I1026 02:03:04.126440   61346 logs.go:282] 0 containers: []
	W1026 02:03:04.126450   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:03:04.126463   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:03:04.126474   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:03:04.139393   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:03:04.139418   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:03:04.203573   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:03:04.203604   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:03:04.203625   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:03:04.239564   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:03:04.239594   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:03:04.275438   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:03:04.275465   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:03:04.307496   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:03:04.307521   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:03:04.345604   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:03:04.345636   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:03:04.455278   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:03:04.455316   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:03:04.499032   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:03:04.499062   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:03:04.571494   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:03:04.571532   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:03:07.300160   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:03:07.300782   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:03:07.300840   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:03:07.300889   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:03:07.340367   61346 cri.go:89] found id: "d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:03:07.340386   61346 cri.go:89] found id: ""
	I1026 02:03:07.340393   61346 logs.go:282] 1 containers: [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738]
	I1026 02:03:07.340438   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:07.344049   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:03:07.344122   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:03:07.375434   61346 cri.go:89] found id: "81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:03:07.375461   61346 cri.go:89] found id: ""
	I1026 02:03:07.375471   61346 logs.go:282] 1 containers: [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e]
	I1026 02:03:07.375525   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:07.379051   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:03:07.379117   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:03:07.411223   61346 cri.go:89] found id: ""
	I1026 02:03:07.411251   61346 logs.go:282] 0 containers: []
	W1026 02:03:07.411261   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:03:07.411268   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:03:07.411331   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:03:07.443527   61346 cri.go:89] found id: "6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:03:07.443547   61346 cri.go:89] found id: "169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:03:07.443550   61346 cri.go:89] found id: ""
	I1026 02:03:07.443557   61346 logs.go:282] 2 containers: [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14]
	I1026 02:03:07.443604   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:07.447208   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:07.450644   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:03:07.450701   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:03:07.482703   61346 cri.go:89] found id: ""
	I1026 02:03:07.482727   61346 logs.go:282] 0 containers: []
	W1026 02:03:07.482735   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:03:07.482740   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:03:07.482782   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:03:07.518953   61346 cri.go:89] found id: "2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:03:07.518986   61346 cri.go:89] found id: ""
	I1026 02:03:07.518995   61346 logs.go:282] 1 containers: [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1]
	I1026 02:03:07.519051   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:03:07.522859   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:03:07.522928   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:03:07.555058   61346 cri.go:89] found id: ""
	I1026 02:03:07.555083   61346 logs.go:282] 0 containers: []
	W1026 02:03:07.555091   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:03:07.555100   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:03:07.555148   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:03:07.588175   61346 cri.go:89] found id: ""
	I1026 02:03:07.588209   61346 logs.go:282] 0 containers: []
	W1026 02:03:07.588221   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:03:07.588238   61346 logs.go:123] Gathering logs for etcd [81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e] ...
	I1026 02:03:07.588252   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81fa43e016b841e8052874f95db1808c2ec005e0f70a3ac26125abd8909ddd3e"
	I1026 02:03:07.626373   61346 logs.go:123] Gathering logs for kube-scheduler [6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8] ...
	I1026 02:03:07.626404   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6db4a8e3b319c7a47fe963a85fd92d11f6a8be24936c937f8e833b97e9f926a8"
	I1026 02:03:07.708119   61346 logs.go:123] Gathering logs for kube-scheduler [169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14] ...
	I1026 02:03:07.708152   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169168f6c50676d9a4ef7e8ed945d8d299a499a5885a476007d50a441c3cdd14"
	I1026 02:03:07.740472   61346 logs.go:123] Gathering logs for kube-apiserver [d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738] ...
	I1026 02:03:07.740497   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a3a5ad528d73ab158a0961bb61109153f59f49231e624959877206a329e738"
	I1026 02:03:07.780052   61346 logs.go:123] Gathering logs for kube-controller-manager [2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1] ...
	I1026 02:03:07.780079   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b91dedb9398697635f3d8fd0e3e13fca70bc94c068282932a31632e20b2f7d1"
	I1026 02:03:07.816183   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:03:07.816210   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:03:06.733714   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:03:08.052390   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:03:08.052426   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:03:08.095417   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:03:08.095454   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:03:08.216568   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:03:08.216618   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:03:08.230951   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:03:08.230979   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:03:08.297568   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
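The gathering cycle that repeats above follows the same two-step pattern for every control-plane component: resolve a container ID with "crictl ps -a --quiet --name=<component>", then tail the last 400 lines of its logs. A minimal Go sketch of that pattern is below; the component names and the fixed 400-line tail come from the log, while running crictl locally via exec.Command (instead of through minikube's ssh_runner) and the helper name are illustrative assumptions.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // tailComponentLogs mirrors the logged pattern: discover the container ID for a
    // named component via crictl, then tail its logs. Local crictl access is an
    // assumption made for brevity.
    func tailComponentLogs(name string) (string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return "", fmt.Errorf("listing %s containers: %w", name, err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return "", fmt.Errorf("no container was found matching %q", name)
    	}
    	// Equivalent of "crictl logs --tail 400 <id>" in the log above.
    	logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", ids[0]).CombinedOutput()
    	return string(logs), err
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
    		if logs, err := tailComponentLogs(c); err != nil {
    			fmt.Println(c+":", err)
    		} else {
    			fmt.Print(logs)
    		}
    	}
    }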
	I1026 02:03:10.798588   61346 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I1026 02:03:10.799205   61346 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I1026 02:03:10.799274   61346 kubeadm.go:597] duration metric: took 4m3.515432127s to restartPrimaryControlPlane
	W1026 02:03:10.799354   61346 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1026 02:03:10.799383   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1026 02:03:11.501079   61346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 02:03:11.518136   61346 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 02:03:11.527605   61346 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:03:11.536460   61346 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:03:11.536479   61346 kubeadm.go:157] found existing configuration files:
	
	I1026 02:03:11.536523   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 02:03:11.544839   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:03:11.544889   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:03:11.553411   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 02:03:11.561760   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:03:11.561802   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:03:11.570406   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 02:03:11.578760   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:03:11.578822   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:03:11.588073   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 02:03:11.596725   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:03:11.596776   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
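The stale-config check above walks the four kubeconfig files, greps each for the expected control-plane endpoint, and removes any file that does not contain it (here every grep fails simply because the files were wiped by "kubeadm reset"). A condensed sketch of that check-then-remove step, assuming local shell access rather than minikube's SSH runner:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint is missing or the file does not
    		// exist; either way the stale file is removed so kubeadm can regenerate it.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "%q not found in %s - removing\n", endpoint, f)
    			exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }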
	I1026 02:03:11.605769   61346 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 02:03:11.648887   61346 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1026 02:03:11.648956   61346 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 02:03:11.753470   61346 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 02:03:11.753649   61346 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 02:03:11.753759   61346 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 02:03:11.761141   61346 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 02:03:11.763476   61346 out.go:235]   - Generating certificates and keys ...
	I1026 02:03:11.763567   61346 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 02:03:11.763620   61346 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 02:03:11.763704   61346 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1026 02:03:11.763781   61346 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1026 02:03:11.763863   61346 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1026 02:03:11.763910   61346 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1026 02:03:11.763967   61346 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1026 02:03:11.764057   61346 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1026 02:03:11.764184   61346 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1026 02:03:11.764287   61346 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1026 02:03:11.764341   61346 kubeadm.go:310] [certs] Using the existing "sa" key
	I1026 02:03:11.764429   61346 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 02:03:11.893012   61346 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 02:03:12.350442   61346 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 02:03:12.597456   61346 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 02:03:12.817591   61346 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 02:03:12.974600   61346 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 02:03:12.975140   61346 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 02:03:12.980868   61346 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 02:03:09.805710   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:03:12.982664   61346 out.go:235]   - Booting up control plane ...
	I1026 02:03:12.982772   61346 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 02:03:12.982838   61346 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 02:03:12.982894   61346 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 02:03:13.006027   61346 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 02:03:13.014869   61346 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 02:03:13.014965   61346 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 02:03:13.148493   61346 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 02:03:13.148661   61346 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 02:03:14.150279   61346 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001868121s
	I1026 02:03:14.150400   61346 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
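Both waits above amount to repeatedly GET-ing a healthz endpoint (http://127.0.0.1:10248/healthz for the kubelet, https://<node>:8443/healthz for the API server) until it answers 200 or the 4m0s budget runs out. A minimal probe sketch follows; the 1s poll interval and the skipped certificate verification during bootstrap are assumptions, not details taken from the log.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeHealthz polls url until it returns HTTP 200 or the timeout elapses.
    func probeHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("%s not healthy after %v", url, timeout)
    }

    func main() {
    	fmt.Println(probeHealthz("https://192.168.72.48:8443/healthz", 4*time.Minute))
    }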
	I1026 02:03:15.885692   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:03:18.957632   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:03:25.037631   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:03:28.109646   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:03:34.189735   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:03:37.261708   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:03:43.341685   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:03:46.413685   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:03:52.493669   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:03:55.565708   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:04:01.645704   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:04:04.717681   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:04:10.797688   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:04:13.869697   62203 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.9:22: connect: no route to host
	I1026 02:04:16.874548   62379 start.go:364] duration metric: took 4m18.24452859s to acquireMachinesLock for "embed-certs-767480"
	I1026 02:04:16.874608   62379 start.go:96] Skipping create...Using existing machine configuration
	I1026 02:04:16.874619   62379 fix.go:54] fixHost starting: 
	I1026 02:04:16.874997   62379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:04:16.875039   62379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:04:16.890823   62379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37231
	I1026 02:04:16.891332   62379 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:04:16.891872   62379 main.go:141] libmachine: Using API Version  1
	I1026 02:04:16.891892   62379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:04:16.892216   62379 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:04:16.892384   62379 main.go:141] libmachine: (embed-certs-767480) Calling .DriverName
	I1026 02:04:16.892530   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetState
	I1026 02:04:16.894281   62379 fix.go:112] recreateIfNeeded on embed-certs-767480: state=Stopped err=<nil>
	I1026 02:04:16.894320   62379 main.go:141] libmachine: (embed-certs-767480) Calling .DriverName
	W1026 02:04:16.894480   62379 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 02:04:16.896308   62379 out.go:177] * Restarting existing kvm2 VM for "embed-certs-767480" ...
	I1026 02:04:16.897733   62379 main.go:141] libmachine: (embed-certs-767480) Calling .Start
	I1026 02:04:16.897900   62379 main.go:141] libmachine: (embed-certs-767480) Ensuring networks are active...
	I1026 02:04:16.898702   62379 main.go:141] libmachine: (embed-certs-767480) Ensuring network default is active
	I1026 02:04:16.899217   62379 main.go:141] libmachine: (embed-certs-767480) Ensuring network mk-embed-certs-767480 is active
	I1026 02:04:16.899646   62379 main.go:141] libmachine: (embed-certs-767480) Getting domain xml...
	I1026 02:04:16.900332   62379 main.go:141] libmachine: (embed-certs-767480) Creating domain...
	I1026 02:04:18.104072   62379 main.go:141] libmachine: (embed-certs-767480) Waiting to get IP...
	I1026 02:04:18.104909   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:18.105330   62379 main.go:141] libmachine: (embed-certs-767480) DBG | unable to find current IP address of domain embed-certs-767480 in network mk-embed-certs-767480
	I1026 02:04:18.105406   62379 main.go:141] libmachine: (embed-certs-767480) DBG | I1026 02:04:18.105323   63558 retry.go:31] will retry after 214.192743ms: waiting for machine to come up
	I1026 02:04:18.320838   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:18.321331   62379 main.go:141] libmachine: (embed-certs-767480) DBG | unable to find current IP address of domain embed-certs-767480 in network mk-embed-certs-767480
	I1026 02:04:18.321354   62379 main.go:141] libmachine: (embed-certs-767480) DBG | I1026 02:04:18.321285   63558 retry.go:31] will retry after 311.708184ms: waiting for machine to come up
	I1026 02:04:16.872194   62203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:04:16.872253   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetMachineName
	I1026 02:04:16.872574   62203 buildroot.go:166] provisioning hostname "no-preload-093148"
	I1026 02:04:16.872601   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetMachineName
	I1026 02:04:16.872801   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHHostname
	I1026 02:04:16.874417   62203 machine.go:96] duration metric: took 4m37.43375326s to provisionDockerMachine
	I1026 02:04:16.874465   62203 fix.go:56] duration metric: took 4m37.454039539s for fixHost
	I1026 02:04:16.874474   62203 start.go:83] releasing machines lock for "no-preload-093148", held for 4m37.454064694s
	W1026 02:04:16.874492   62203 start.go:714] error starting host: provision: host is not running
	W1026 02:04:16.874588   62203 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1026 02:04:16.874597   62203 start.go:729] Will try again in 5 seconds ...
	I1026 02:04:18.634725   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:18.635223   62379 main.go:141] libmachine: (embed-certs-767480) DBG | unable to find current IP address of domain embed-certs-767480 in network mk-embed-certs-767480
	I1026 02:04:18.635246   62379 main.go:141] libmachine: (embed-certs-767480) DBG | I1026 02:04:18.635180   63558 retry.go:31] will retry after 398.627778ms: waiting for machine to come up
	I1026 02:04:19.035857   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:19.036372   62379 main.go:141] libmachine: (embed-certs-767480) DBG | unable to find current IP address of domain embed-certs-767480 in network mk-embed-certs-767480
	I1026 02:04:19.036394   62379 main.go:141] libmachine: (embed-certs-767480) DBG | I1026 02:04:19.036324   63558 retry.go:31] will retry after 436.990611ms: waiting for machine to come up
	I1026 02:04:19.474841   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:19.475229   62379 main.go:141] libmachine: (embed-certs-767480) DBG | unable to find current IP address of domain embed-certs-767480 in network mk-embed-certs-767480
	I1026 02:04:19.475271   62379 main.go:141] libmachine: (embed-certs-767480) DBG | I1026 02:04:19.475210   63558 retry.go:31] will retry after 505.376068ms: waiting for machine to come up
	I1026 02:04:19.981829   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:19.982300   62379 main.go:141] libmachine: (embed-certs-767480) DBG | unable to find current IP address of domain embed-certs-767480 in network mk-embed-certs-767480
	I1026 02:04:19.982340   62379 main.go:141] libmachine: (embed-certs-767480) DBG | I1026 02:04:19.982214   63558 retry.go:31] will retry after 833.243666ms: waiting for machine to come up
	I1026 02:04:20.816780   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:20.817273   62379 main.go:141] libmachine: (embed-certs-767480) DBG | unable to find current IP address of domain embed-certs-767480 in network mk-embed-certs-767480
	I1026 02:04:20.817305   62379 main.go:141] libmachine: (embed-certs-767480) DBG | I1026 02:04:20.817223   63558 retry.go:31] will retry after 1.022104478s: waiting for machine to come up
	I1026 02:04:21.841296   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:21.841745   62379 main.go:141] libmachine: (embed-certs-767480) DBG | unable to find current IP address of domain embed-certs-767480 in network mk-embed-certs-767480
	I1026 02:04:21.841773   62379 main.go:141] libmachine: (embed-certs-767480) DBG | I1026 02:04:21.841712   63558 retry.go:31] will retry after 1.267163141s: waiting for machine to come up
	I1026 02:04:23.110418   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:23.110923   62379 main.go:141] libmachine: (embed-certs-767480) DBG | unable to find current IP address of domain embed-certs-767480 in network mk-embed-certs-767480
	I1026 02:04:23.110954   62379 main.go:141] libmachine: (embed-certs-767480) DBG | I1026 02:04:23.110874   63558 retry.go:31] will retry after 1.523853006s: waiting for machine to come up
	I1026 02:04:21.876720   62203 start.go:360] acquireMachinesLock for no-preload-093148: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 02:04:24.636550   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:24.637021   62379 main.go:141] libmachine: (embed-certs-767480) DBG | unable to find current IP address of domain embed-certs-767480 in network mk-embed-certs-767480
	I1026 02:04:24.637050   62379 main.go:141] libmachine: (embed-certs-767480) DBG | I1026 02:04:24.636970   63558 retry.go:31] will retry after 1.960487998s: waiting for machine to come up
	I1026 02:04:26.600272   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:26.600766   62379 main.go:141] libmachine: (embed-certs-767480) DBG | unable to find current IP address of domain embed-certs-767480 in network mk-embed-certs-767480
	I1026 02:04:26.600790   62379 main.go:141] libmachine: (embed-certs-767480) DBG | I1026 02:04:26.600727   63558 retry.go:31] will retry after 2.883124816s: waiting for machine to come up
	I1026 02:04:29.487183   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:29.487603   62379 main.go:141] libmachine: (embed-certs-767480) DBG | unable to find current IP address of domain embed-certs-767480 in network mk-embed-certs-767480
	I1026 02:04:29.487624   62379 main.go:141] libmachine: (embed-certs-767480) DBG | I1026 02:04:29.487575   63558 retry.go:31] will retry after 3.440703508s: waiting for machine to come up
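While the restarted VM boots, the driver polls for a DHCP lease with a jittered, growing delay (214ms, 311ms, ... 3.4s in the retry.go lines above). A generic sketch of that wait loop; the lookup callback stands in for the real libvirt lease query, and the exact backoff schedule is an assumption.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookup until it returns an address or the deadline passes,
    // growing the delay between attempts with a little jitter, similar to the
    // "will retry after ...: waiting for machine to come up" lines in the log.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookup(); err == nil && ip != "" {
    			return ip, nil
    		}
    		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
    		time.Sleep(jittered)
    		if delay < 4*time.Second {
    			delay *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	ip, err := waitForIP(func() (string, error) { return "192.168.61.84", nil }, time.Minute)
    	fmt.Println(ip, err)
    }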
	I1026 02:04:32.929349   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:32.929714   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has current primary IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:32.929747   62379 main.go:141] libmachine: (embed-certs-767480) Found IP for machine: 192.168.61.84
	I1026 02:04:32.929762   62379 main.go:141] libmachine: (embed-certs-767480) Reserving static IP address...
	I1026 02:04:32.930175   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "embed-certs-767480", mac: "52:54:00:0d:bc:1b", ip: "192.168.61.84"} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:32.930203   62379 main.go:141] libmachine: (embed-certs-767480) DBG | skip adding static IP to network mk-embed-certs-767480 - found existing host DHCP lease matching {name: "embed-certs-767480", mac: "52:54:00:0d:bc:1b", ip: "192.168.61.84"}
	I1026 02:04:32.930216   62379 main.go:141] libmachine: (embed-certs-767480) Reserved static IP address: 192.168.61.84
	I1026 02:04:32.930225   62379 main.go:141] libmachine: (embed-certs-767480) Waiting for SSH to be available...
	I1026 02:04:32.930236   62379 main.go:141] libmachine: (embed-certs-767480) DBG | Getting to WaitForSSH function...
	I1026 02:04:32.932332   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:32.932825   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:32.932854   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:32.933007   62379 main.go:141] libmachine: (embed-certs-767480) DBG | Using SSH client type: external
	I1026 02:04:32.933025   62379 main.go:141] libmachine: (embed-certs-767480) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/embed-certs-767480/id_rsa (-rw-------)
	I1026 02:04:32.933075   62379 main.go:141] libmachine: (embed-certs-767480) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/embed-certs-767480/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 02:04:32.933092   62379 main.go:141] libmachine: (embed-certs-767480) DBG | About to run SSH command:
	I1026 02:04:32.933103   62379 main.go:141] libmachine: (embed-certs-767480) DBG | exit 0
	I1026 02:04:33.057163   62379 main.go:141] libmachine: (embed-certs-767480) DBG | SSH cmd err, output: <nil>: 
	I1026 02:04:33.057538   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetConfigRaw
	I1026 02:04:33.058108   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetIP
	I1026 02:04:33.060793   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:33.061201   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:33.061233   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:33.061487   62379 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/embed-certs-767480/config.json ...
	I1026 02:04:33.061685   62379 machine.go:93] provisionDockerMachine start ...
	I1026 02:04:33.061703   62379 main.go:141] libmachine: (embed-certs-767480) Calling .DriverName
	I1026 02:04:33.061902   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHHostname
	I1026 02:04:33.064050   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:33.064369   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:33.064404   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:33.064522   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHPort
	I1026 02:04:33.064745   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:33.064938   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:33.065054   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHUsername
	I1026 02:04:33.065195   62379 main.go:141] libmachine: Using SSH client type: native
	I1026 02:04:33.065393   62379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I1026 02:04:33.065405   62379 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 02:04:33.165323   62379 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1026 02:04:33.165356   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetMachineName
	I1026 02:04:33.165595   62379 buildroot.go:166] provisioning hostname "embed-certs-767480"
	I1026 02:04:33.165621   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetMachineName
	I1026 02:04:33.165743   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHHostname
	I1026 02:04:33.168307   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:33.168655   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:33.168680   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:33.168799   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHPort
	I1026 02:04:33.168964   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:33.169118   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:33.169253   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHUsername
	I1026 02:04:33.169408   62379 main.go:141] libmachine: Using SSH client type: native
	I1026 02:04:33.169639   62379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I1026 02:04:33.169656   62379 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-767480 && echo "embed-certs-767480" | sudo tee /etc/hostname
	I1026 02:04:33.284345   62379 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-767480
	
	I1026 02:04:33.284371   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHHostname
	I1026 02:04:33.286907   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:33.287189   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:33.287219   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:33.287325   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHPort
	I1026 02:04:33.287503   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:33.287666   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:33.287767   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHUsername
	I1026 02:04:33.287883   62379 main.go:141] libmachine: Using SSH client type: native
	I1026 02:04:33.288087   62379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I1026 02:04:33.288104   62379 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-767480' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-767480/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-767480' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 02:04:33.397577   62379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:04:33.397617   62379 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 02:04:33.397642   62379 buildroot.go:174] setting up certificates
	I1026 02:04:33.397651   62379 provision.go:84] configureAuth start
	I1026 02:04:33.397663   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetMachineName
	I1026 02:04:33.397972   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetIP
	I1026 02:04:33.400584   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:33.400987   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:33.401015   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:33.401155   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHHostname
	I1026 02:04:33.403313   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:33.403598   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:33.403629   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:33.403833   62379 provision.go:143] copyHostCerts
	I1026 02:04:33.403913   62379 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 02:04:33.403926   62379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 02:04:33.403996   62379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 02:04:33.404088   62379 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 02:04:33.404096   62379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 02:04:33.404120   62379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 02:04:33.404176   62379 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 02:04:33.404183   62379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 02:04:33.404207   62379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 02:04:33.404272   62379 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.embed-certs-767480 san=[127.0.0.1 192.168.61.84 embed-certs-767480 localhost minikube]
	I1026 02:04:34.301944   62745 start.go:364] duration metric: took 3m55.01831188s to acquireMachinesLock for "old-k8s-version-385716"
	I1026 02:04:34.302015   62745 start.go:96] Skipping create...Using existing machine configuration
	I1026 02:04:34.302023   62745 fix.go:54] fixHost starting: 
	I1026 02:04:34.302483   62745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:04:34.302539   62745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:04:34.319621   62745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45345
	I1026 02:04:34.320093   62745 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:04:34.320633   62745 main.go:141] libmachine: Using API Version  1
	I1026 02:04:34.320663   62745 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:04:34.321018   62745 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:04:34.321191   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:04:34.321343   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetState
	I1026 02:04:34.322823   62745 fix.go:112] recreateIfNeeded on old-k8s-version-385716: state=Stopped err=<nil>
	I1026 02:04:34.322854   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	W1026 02:04:34.323009   62745 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 02:04:34.324931   62745 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-385716" ...
	I1026 02:04:33.713002   62379 provision.go:177] copyRemoteCerts
	I1026 02:04:33.713058   62379 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 02:04:33.713084   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHHostname
	I1026 02:04:33.715888   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:33.716213   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:33.716238   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:33.716411   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHPort
	I1026 02:04:33.716611   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:33.716757   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHUsername
	I1026 02:04:33.716883   62379 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/embed-certs-767480/id_rsa Username:docker}
	I1026 02:04:33.795178   62379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 02:04:33.817607   62379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1026 02:04:33.839026   62379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 02:04:33.860320   62379 provision.go:87] duration metric: took 462.653507ms to configureAuth
	I1026 02:04:33.860349   62379 buildroot.go:189] setting minikube options for container-runtime
	I1026 02:04:33.860543   62379 config.go:182] Loaded profile config "embed-certs-767480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:04:33.860636   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHHostname
	I1026 02:04:33.863365   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:33.863733   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:33.863766   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:33.863890   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHPort
	I1026 02:04:33.864086   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:33.864252   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:33.864366   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHUsername
	I1026 02:04:33.864528   62379 main.go:141] libmachine: Using SSH client type: native
	I1026 02:04:33.864742   62379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I1026 02:04:33.864763   62379 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 02:04:34.076777   62379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 02:04:34.076801   62379 machine.go:96] duration metric: took 1.015104389s to provisionDockerMachine
	I1026 02:04:34.076813   62379 start.go:293] postStartSetup for "embed-certs-767480" (driver="kvm2")
	I1026 02:04:34.076822   62379 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 02:04:34.076836   62379 main.go:141] libmachine: (embed-certs-767480) Calling .DriverName
	I1026 02:04:34.077186   62379 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 02:04:34.077223   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHHostname
	I1026 02:04:34.079753   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:34.080085   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:34.080122   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:34.080238   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHPort
	I1026 02:04:34.080437   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:34.080586   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHUsername
	I1026 02:04:34.080697   62379 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/embed-certs-767480/id_rsa Username:docker}
	I1026 02:04:34.159599   62379 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 02:04:34.163388   62379 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 02:04:34.163421   62379 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 02:04:34.163505   62379 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 02:04:34.163605   62379 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 02:04:34.163718   62379 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 02:04:34.172802   62379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:04:34.198776   62379 start.go:296] duration metric: took 121.948829ms for postStartSetup
	I1026 02:04:34.198833   62379 fix.go:56] duration metric: took 17.324214326s for fixHost
	I1026 02:04:34.198859   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHHostname
	I1026 02:04:34.201515   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:34.201865   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:34.201902   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:34.202105   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHPort
	I1026 02:04:34.202299   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:34.202432   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:34.202543   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHUsername
	I1026 02:04:34.202672   62379 main.go:141] libmachine: Using SSH client type: native
	I1026 02:04:34.202847   62379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.84 22 <nil> <nil>}
	I1026 02:04:34.202857   62379 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 02:04:34.301732   62379 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729908274.274923252
	
	I1026 02:04:34.301758   62379 fix.go:216] guest clock: 1729908274.274923252
	I1026 02:04:34.301767   62379 fix.go:229] Guest: 2024-10-26 02:04:34.274923252 +0000 UTC Remote: 2024-10-26 02:04:34.19883864 +0000 UTC m=+275.711800785 (delta=76.084612ms)
	I1026 02:04:34.301807   62379 fix.go:200] guest clock delta is within tolerance: 76.084612ms
	I1026 02:04:34.301816   62379 start.go:83] releasing machines lock for "embed-certs-767480", held for 17.427230112s
	I1026 02:04:34.301851   62379 main.go:141] libmachine: (embed-certs-767480) Calling .DriverName
	I1026 02:04:34.302131   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetIP
	I1026 02:04:34.305033   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:34.305462   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:34.305487   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:34.305669   62379 main.go:141] libmachine: (embed-certs-767480) Calling .DriverName
	I1026 02:04:34.306133   62379 main.go:141] libmachine: (embed-certs-767480) Calling .DriverName
	I1026 02:04:34.306284   62379 main.go:141] libmachine: (embed-certs-767480) Calling .DriverName
	I1026 02:04:34.306381   62379 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 02:04:34.306421   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHHostname
	I1026 02:04:34.306443   62379 ssh_runner.go:195] Run: cat /version.json
	I1026 02:04:34.306468   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHHostname
	I1026 02:04:34.309151   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:34.309468   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:34.309523   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:34.309555   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:34.309661   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHPort
	I1026 02:04:34.309814   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:34.309940   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:34.309969   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:34.310000   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHUsername
	I1026 02:04:34.310092   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHPort
	I1026 02:04:34.310189   62379 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/embed-certs-767480/id_rsa Username:docker}
	I1026 02:04:34.310245   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:34.310365   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHUsername
	I1026 02:04:34.310510   62379 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/embed-certs-767480/id_rsa Username:docker}
	I1026 02:04:34.385847   62379 ssh_runner.go:195] Run: systemctl --version
	I1026 02:04:34.417283   62379 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 02:04:34.556776   62379 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 02:04:34.562445   62379 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 02:04:34.562505   62379 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 02:04:34.577785   62379 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 02:04:34.577812   62379 start.go:495] detecting cgroup driver to use...
	I1026 02:04:34.577892   62379 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 02:04:34.594058   62379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 02:04:34.606970   62379 docker.go:217] disabling cri-docker service (if available) ...
	I1026 02:04:34.607044   62379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 02:04:34.620404   62379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 02:04:34.634264   62379 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 02:04:34.749922   62379 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 02:04:34.919110   62379 docker.go:233] disabling docker service ...
	I1026 02:04:34.919181   62379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 02:04:34.932437   62379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 02:04:34.949211   62379 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 02:04:35.077036   62379 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 02:04:35.207905   62379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 02:04:35.221249   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 02:04:35.238016   62379 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 02:04:35.238076   62379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:04:35.247931   62379 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 02:04:35.247998   62379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:04:35.259243   62379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:04:35.269144   62379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:04:35.278851   62379 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 02:04:35.288737   62379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:04:35.298294   62379 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:04:35.318118   62379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:04:35.333847   62379 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 02:04:35.342803   62379 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 02:04:35.342862   62379 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 02:04:35.353998   62379 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 02:04:35.362776   62379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:04:35.469785   62379 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 02:04:35.563595   62379 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 02:04:35.563661   62379 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 02:04:35.568132   62379 start.go:563] Will wait 60s for crictl version
	I1026 02:04:35.568182   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:04:35.572419   62379 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 02:04:35.621325   62379 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 02:04:35.621411   62379 ssh_runner.go:195] Run: crio --version
	I1026 02:04:35.653129   62379 ssh_runner.go:195] Run: crio --version
	I1026 02:04:35.685283   62379 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 02:04:35.686640   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetIP
	I1026 02:04:35.689403   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:35.689770   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:35.689797   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:35.689963   62379 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1026 02:04:35.693894   62379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:04:35.705658   62379 kubeadm.go:883] updating cluster {Name:embed-certs-767480 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-767480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 02:04:35.705816   62379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:04:35.705885   62379 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:04:35.743828   62379 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1026 02:04:35.743913   62379 ssh_runner.go:195] Run: which lz4
	I1026 02:04:35.748100   62379 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 02:04:35.752131   62379 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 02:04:35.752160   62379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1026 02:04:36.970302   62379 crio.go:462] duration metric: took 1.222286622s to copy over tarball
	I1026 02:04:36.970380   62379 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 02:04:34.326257   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .Start
	I1026 02:04:34.326406   62745 main.go:141] libmachine: (old-k8s-version-385716) Ensuring networks are active...
	I1026 02:04:34.327154   62745 main.go:141] libmachine: (old-k8s-version-385716) Ensuring network default is active
	I1026 02:04:34.327468   62745 main.go:141] libmachine: (old-k8s-version-385716) Ensuring network mk-old-k8s-version-385716 is active
	I1026 02:04:34.327843   62745 main.go:141] libmachine: (old-k8s-version-385716) Getting domain xml...
	I1026 02:04:34.328494   62745 main.go:141] libmachine: (old-k8s-version-385716) Creating domain...
	I1026 02:04:35.570715   62745 main.go:141] libmachine: (old-k8s-version-385716) Waiting to get IP...
	I1026 02:04:35.571457   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:35.571935   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:35.572026   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:35.571914   63673 retry.go:31] will retry after 229.540157ms: waiting for machine to come up
	I1026 02:04:35.803476   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:35.803988   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:35.804009   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:35.803941   63673 retry.go:31] will retry after 271.688891ms: waiting for machine to come up
	I1026 02:04:36.077522   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:36.078096   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:36.078125   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:36.078053   63673 retry.go:31] will retry after 374.365537ms: waiting for machine to come up
	I1026 02:04:36.453868   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:36.454427   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:36.454456   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:36.454381   63673 retry.go:31] will retry after 578.001931ms: waiting for machine to come up
	I1026 02:04:37.034042   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:37.034553   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:37.034585   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:37.034473   63673 retry.go:31] will retry after 469.528312ms: waiting for machine to come up
	I1026 02:04:37.505236   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:37.505849   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:37.505885   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:37.505822   63673 retry.go:31] will retry after 826.394258ms: waiting for machine to come up
	I1026 02:04:38.333978   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:38.334380   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:38.334410   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:38.334336   63673 retry.go:31] will retry after 731.652813ms: waiting for machine to come up
	I1026 02:04:39.067272   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:39.067750   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:39.067777   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:39.067697   63673 retry.go:31] will retry after 1.141938018s: waiting for machine to come up
	I1026 02:04:39.103133   62379 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132726331s)
	I1026 02:04:39.103159   62379 crio.go:469] duration metric: took 2.132828425s to extract the tarball
	I1026 02:04:39.103166   62379 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 02:04:39.139795   62379 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:04:39.182210   62379 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 02:04:39.182231   62379 cache_images.go:84] Images are preloaded, skipping loading
	I1026 02:04:39.182239   62379 kubeadm.go:934] updating node { 192.168.61.84 8443 v1.31.2 crio true true} ...
	I1026 02:04:39.182390   62379 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-767480 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-767480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 02:04:39.182485   62379 ssh_runner.go:195] Run: crio config
	I1026 02:04:39.224658   62379 cni.go:84] Creating CNI manager for ""
	I1026 02:04:39.224684   62379 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:04:39.224695   62379 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 02:04:39.224732   62379 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.84 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-767480 NodeName:embed-certs-767480 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 02:04:39.224910   62379 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-767480"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.84"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.84"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 02:04:39.224996   62379 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 02:04:39.234614   62379 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 02:04:39.234695   62379 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 02:04:39.243872   62379 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1026 02:04:39.260697   62379 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 02:04:39.276250   62379 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1026 02:04:39.293527   62379 ssh_runner.go:195] Run: grep 192.168.61.84	control-plane.minikube.internal$ /etc/hosts
	I1026 02:04:39.297041   62379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:04:39.308709   62379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:04:39.410410   62379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:04:39.426251   62379 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/embed-certs-767480 for IP: 192.168.61.84
	I1026 02:04:39.426272   62379 certs.go:194] generating shared ca certs ...
	I1026 02:04:39.426304   62379 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:04:39.426441   62379 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 02:04:39.426490   62379 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 02:04:39.426506   62379 certs.go:256] generating profile certs ...
	I1026 02:04:39.426588   62379 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/embed-certs-767480/client.key
	I1026 02:04:39.426635   62379 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/embed-certs-767480/apiserver.key.fd05abfd
	I1026 02:04:39.426685   62379 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/embed-certs-767480/proxy-client.key
	I1026 02:04:39.426796   62379 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 02:04:39.426825   62379 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 02:04:39.426834   62379 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 02:04:39.426861   62379 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 02:04:39.426889   62379 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 02:04:39.426919   62379 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 02:04:39.426981   62379 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:04:39.427613   62379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 02:04:39.465254   62379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 02:04:39.507613   62379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 02:04:39.559165   62379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 02:04:39.589585   62379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/embed-certs-767480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 02:04:39.616010   62379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/embed-certs-767480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 02:04:39.638003   62379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/embed-certs-767480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 02:04:39.661057   62379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/embed-certs-767480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 02:04:39.683415   62379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 02:04:39.705306   62379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 02:04:39.726818   62379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 02:04:39.747995   62379 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 02:04:39.763422   62379 ssh_runner.go:195] Run: openssl version
	I1026 02:04:39.769290   62379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 02:04:39.780603   62379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 02:04:39.784824   62379 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 02:04:39.784890   62379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 02:04:39.790431   62379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 02:04:39.801433   62379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 02:04:39.811458   62379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 02:04:39.815705   62379 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 02:04:39.815764   62379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 02:04:39.821099   62379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 02:04:39.830995   62379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 02:04:39.840868   62379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:04:39.845156   62379 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:04:39.845216   62379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:04:39.850839   62379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 02:04:39.861742   62379 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 02:04:39.866159   62379 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 02:04:39.872059   62379 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 02:04:39.877792   62379 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 02:04:39.883403   62379 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 02:04:39.888795   62379 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 02:04:39.894386   62379 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 02:04:39.899776   62379 kubeadm.go:392] StartCluster: {Name:embed-certs-767480 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-767480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:04:39.899880   62379 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 02:04:39.899936   62379 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:04:39.940228   62379 cri.go:89] found id: ""
	I1026 02:04:39.940322   62379 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 02:04:39.950440   62379 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1026 02:04:39.950469   62379 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1026 02:04:39.950527   62379 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 02:04:39.960412   62379 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 02:04:39.961394   62379 kubeconfig.go:125] found "embed-certs-767480" server: "https://192.168.61.84:8443"
	I1026 02:04:39.963321   62379 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 02:04:39.972883   62379 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.84
	I1026 02:04:39.972917   62379 kubeadm.go:1160] stopping kube-system containers ...
	I1026 02:04:39.972936   62379 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1026 02:04:39.972987   62379 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:04:40.009184   62379 cri.go:89] found id: ""
	I1026 02:04:40.009265   62379 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1026 02:04:40.024768   62379 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:04:40.033870   62379 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:04:40.033895   62379 kubeadm.go:157] found existing configuration files:
	
	I1026 02:04:40.033945   62379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 02:04:40.042827   62379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:04:40.042895   62379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:04:40.052758   62379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 02:04:40.061358   62379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:04:40.061440   62379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:04:40.070364   62379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 02:04:40.079014   62379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:04:40.079071   62379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:04:40.088076   62379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 02:04:40.096555   62379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:04:40.096612   62379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 02:04:40.105393   62379 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 02:04:40.114333   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:04:40.212685   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:04:41.294409   62379 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.081686074s)
	I1026 02:04:41.294452   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:04:41.509558   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:04:41.575677   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:04:41.651389   62379 api_server.go:52] waiting for apiserver process to appear ...
	I1026 02:04:41.651467   62379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:04:42.151689   62379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:04:42.652540   62379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:04:43.151543   62379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:04:40.211539   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:40.211930   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:40.211987   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:40.211906   63673 retry.go:31] will retry after 1.591834442s: waiting for machine to come up
	I1026 02:04:41.805096   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:41.805608   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:41.805638   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:41.805563   63673 retry.go:31] will retry after 2.248972392s: waiting for machine to come up
	I1026 02:04:44.055913   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:44.056399   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:44.056429   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:44.056350   63673 retry.go:31] will retry after 1.748696748s: waiting for machine to come up
	I1026 02:04:43.652122   62379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:04:43.668077   62379 api_server.go:72] duration metric: took 2.016683798s to wait for apiserver process to appear ...
	I1026 02:04:43.668106   62379 api_server.go:88] waiting for apiserver healthz status ...
	I1026 02:04:43.668130   62379 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I1026 02:04:46.623308   62379 api_server.go:279] https://192.168.61.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 02:04:46.623333   62379 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 02:04:46.623345   62379 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I1026 02:04:46.638917   62379 api_server.go:279] https://192.168.61.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 02:04:46.638957   62379 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 02:04:46.669103   62379 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I1026 02:04:46.678332   62379 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 02:04:46.678361   62379 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 02:04:47.168386   62379 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I1026 02:04:47.174364   62379 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 02:04:47.174394   62379 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 02:04:47.669091   62379 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I1026 02:04:47.678488   62379 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 02:04:47.678510   62379 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 02:04:48.168756   62379 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I1026 02:04:48.178014   62379 api_server.go:279] https://192.168.61.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 02:04:48.178047   62379 api_server.go:103] status: https://192.168.61.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 02:04:48.668811   62379 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I1026 02:04:48.673915   62379 api_server.go:279] https://192.168.61.84:8443/healthz returned 200:
	ok
	I1026 02:04:48.681313   62379 api_server.go:141] control plane version: v1.31.2
	I1026 02:04:48.681335   62379 api_server.go:131] duration metric: took 5.013222188s to wait for apiserver health ...
	I1026 02:04:48.681343   62379 cni.go:84] Creating CNI manager for ""
	I1026 02:04:48.681349   62379 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:04:48.683099   62379 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 02:04:45.806729   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:45.807252   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:45.807282   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:45.807210   63673 retry.go:31] will retry after 2.585377512s: waiting for machine to come up
	I1026 02:04:48.396305   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:48.396788   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:48.396822   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:48.396742   63673 retry.go:31] will retry after 3.406908475s: waiting for machine to come up
	I1026 02:04:48.684449   62379 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 02:04:48.694050   62379 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1026 02:04:48.710062   62379 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 02:04:48.730179   62379 system_pods.go:59] 8 kube-system pods found
	I1026 02:04:48.730208   62379 system_pods.go:61] "coredns-7c65d6cfc9-cs6fv" [05855bd2-58d5-4d83-b5b4-6b7d28b13957] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 02:04:48.730215   62379 system_pods.go:61] "etcd-embed-certs-767480" [4051ced7-363a-45fd-be21-ff185f16e2f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 02:04:48.730221   62379 system_pods.go:61] "kube-apiserver-embed-certs-767480" [04a9ea55-a86f-43b0-a784-0ea9418514c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 02:04:48.730226   62379 system_pods.go:61] "kube-controller-manager-embed-certs-767480" [c90949e8-8094-4535-8b16-5836fb6a6d41] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 02:04:48.730233   62379 system_pods.go:61] "kube-proxy-nlwh5" [e83fffc8-a912-4919-b5f6-ccc2745bf855] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 02:04:48.730240   62379 system_pods.go:61] "kube-scheduler-embed-certs-767480" [24749997-d237-4b45-9e45-609bac5f350c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 02:04:48.730245   62379 system_pods.go:61] "metrics-server-6867b74b74-c9cwx" [62a837f0-6fdb-418e-a5dd-e3196bb51346] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 02:04:48.730251   62379 system_pods.go:61] "storage-provisioner" [e34a3b8d-f8fd-4d67-b4e0-cd4b532d2824] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 02:04:48.730262   62379 system_pods.go:74] duration metric: took 20.182556ms to wait for pod list to return data ...
	I1026 02:04:48.730271   62379 node_conditions.go:102] verifying NodePressure condition ...
	I1026 02:04:48.735204   62379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 02:04:48.735306   62379 node_conditions.go:123] node cpu capacity is 2
	I1026 02:04:48.735696   62379 node_conditions.go:105] duration metric: took 5.416755ms to run NodePressure ...
	I1026 02:04:48.735722   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:04:49.026277   62379 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1026 02:04:49.030121   62379 kubeadm.go:739] kubelet initialised
	I1026 02:04:49.030142   62379 kubeadm.go:740] duration metric: took 3.835976ms waiting for restarted kubelet to initialise ...
	I1026 02:04:49.030152   62379 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:04:49.034354   62379 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cs6fv" in "kube-system" namespace to be "Ready" ...
	I1026 02:04:49.039880   62379 pod_ready.go:98] node "embed-certs-767480" hosting pod "coredns-7c65d6cfc9-cs6fv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-767480" has status "Ready":"False"
	I1026 02:04:49.039899   62379 pod_ready.go:82] duration metric: took 5.525142ms for pod "coredns-7c65d6cfc9-cs6fv" in "kube-system" namespace to be "Ready" ...
	E1026 02:04:49.039907   62379 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-767480" hosting pod "coredns-7c65d6cfc9-cs6fv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-767480" has status "Ready":"False"
	I1026 02:04:49.039914   62379 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-767480" in "kube-system" namespace to be "Ready" ...
	I1026 02:04:49.044573   62379 pod_ready.go:98] node "embed-certs-767480" hosting pod "etcd-embed-certs-767480" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-767480" has status "Ready":"False"
	I1026 02:04:49.044614   62379 pod_ready.go:82] duration metric: took 4.690779ms for pod "etcd-embed-certs-767480" in "kube-system" namespace to be "Ready" ...
	E1026 02:04:49.044625   62379 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-767480" hosting pod "etcd-embed-certs-767480" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-767480" has status "Ready":"False"
	I1026 02:04:49.044634   62379 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-767480" in "kube-system" namespace to be "Ready" ...
	I1026 02:04:49.049731   62379 pod_ready.go:98] node "embed-certs-767480" hosting pod "kube-apiserver-embed-certs-767480" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-767480" has status "Ready":"False"
	I1026 02:04:49.049749   62379 pod_ready.go:82] duration metric: took 5.106989ms for pod "kube-apiserver-embed-certs-767480" in "kube-system" namespace to be "Ready" ...
	E1026 02:04:49.049757   62379 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-767480" hosting pod "kube-apiserver-embed-certs-767480" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-767480" has status "Ready":"False"
	I1026 02:04:49.049762   62379 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-767480" in "kube-system" namespace to be "Ready" ...
	I1026 02:04:49.114035   62379 pod_ready.go:98] node "embed-certs-767480" hosting pod "kube-controller-manager-embed-certs-767480" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-767480" has status "Ready":"False"
	I1026 02:04:49.114068   62379 pod_ready.go:82] duration metric: took 64.297827ms for pod "kube-controller-manager-embed-certs-767480" in "kube-system" namespace to be "Ready" ...
	E1026 02:04:49.114077   62379 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-767480" hosting pod "kube-controller-manager-embed-certs-767480" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-767480" has status "Ready":"False"
	I1026 02:04:49.114084   62379 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nlwh5" in "kube-system" namespace to be "Ready" ...
	I1026 02:04:49.513133   62379 pod_ready.go:98] node "embed-certs-767480" hosting pod "kube-proxy-nlwh5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-767480" has status "Ready":"False"
	I1026 02:04:49.513162   62379 pod_ready.go:82] duration metric: took 399.070229ms for pod "kube-proxy-nlwh5" in "kube-system" namespace to be "Ready" ...
	E1026 02:04:49.513173   62379 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-767480" hosting pod "kube-proxy-nlwh5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-767480" has status "Ready":"False"
	I1026 02:04:49.513182   62379 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-767480" in "kube-system" namespace to be "Ready" ...
	I1026 02:04:49.917054   62379 pod_ready.go:98] node "embed-certs-767480" hosting pod "kube-scheduler-embed-certs-767480" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-767480" has status "Ready":"False"
	I1026 02:04:49.917083   62379 pod_ready.go:82] duration metric: took 403.892542ms for pod "kube-scheduler-embed-certs-767480" in "kube-system" namespace to be "Ready" ...
	E1026 02:04:49.917093   62379 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-767480" hosting pod "kube-scheduler-embed-certs-767480" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-767480" has status "Ready":"False"
	I1026 02:04:49.917103   62379 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace to be "Ready" ...
	I1026 02:04:50.312942   62379 pod_ready.go:98] node "embed-certs-767480" hosting pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-767480" has status "Ready":"False"
	I1026 02:04:50.312971   62379 pod_ready.go:82] duration metric: took 395.85713ms for pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace to be "Ready" ...
	E1026 02:04:50.312982   62379 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-767480" hosting pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-767480" has status "Ready":"False"
	I1026 02:04:50.312994   62379 pod_ready.go:39] duration metric: took 1.282829942s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:04:50.313014   62379 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 02:04:50.323896   62379 ops.go:34] apiserver oom_adj: -16
	I1026 02:04:50.323922   62379 kubeadm.go:597] duration metric: took 10.37343953s to restartPrimaryControlPlane
	I1026 02:04:50.323931   62379 kubeadm.go:394] duration metric: took 10.424160591s to StartCluster
	I1026 02:04:50.323949   62379 settings.go:142] acquiring lock: {Name:mkb363a7a1b1532a7f832b54a0283d0a9e3d2b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:04:50.324037   62379 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:04:50.325557   62379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:04:50.325805   62379 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.84 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 02:04:50.325876   62379 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 02:04:50.325990   62379 config.go:182] Loaded profile config "embed-certs-767480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:04:50.326004   62379 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-767480"
	I1026 02:04:50.326016   62379 addons.go:69] Setting default-storageclass=true in profile "embed-certs-767480"
	I1026 02:04:50.326031   62379 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-767480"
	I1026 02:04:50.326022   62379 addons.go:69] Setting metrics-server=true in profile "embed-certs-767480"
	W1026 02:04:50.326040   62379 addons.go:243] addon storage-provisioner should already be in state true
	I1026 02:04:50.326044   62379 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-767480"
	I1026 02:04:50.326046   62379 addons.go:234] Setting addon metrics-server=true in "embed-certs-767480"
	W1026 02:04:50.326071   62379 addons.go:243] addon metrics-server should already be in state true
	I1026 02:04:50.326077   62379 host.go:66] Checking if "embed-certs-767480" exists ...
	I1026 02:04:50.326097   62379 host.go:66] Checking if "embed-certs-767480" exists ...
	I1026 02:04:50.326462   62379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:04:50.326492   62379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:04:50.326500   62379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:04:50.326510   62379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:04:50.326524   62379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:04:50.326529   62379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:04:50.328634   62379 out.go:177] * Verifying Kubernetes components...
	I1026 02:04:50.330152   62379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:04:50.342244   62379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42187
	I1026 02:04:50.342262   62379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46497
	I1026 02:04:50.342276   62379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38685
	I1026 02:04:50.342611   62379 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:04:50.342704   62379 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:04:50.342709   62379 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:04:50.343115   62379 main.go:141] libmachine: Using API Version  1
	I1026 02:04:50.343134   62379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:04:50.343251   62379 main.go:141] libmachine: Using API Version  1
	I1026 02:04:50.343262   62379 main.go:141] libmachine: Using API Version  1
	I1026 02:04:50.343272   62379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:04:50.343277   62379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:04:50.343488   62379 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:04:50.343598   62379 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:04:50.343600   62379 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:04:50.343711   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetState
	I1026 02:04:50.344035   62379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:04:50.344070   62379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:04:50.344111   62379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:04:50.344150   62379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:04:50.347189   62379 addons.go:234] Setting addon default-storageclass=true in "embed-certs-767480"
	W1026 02:04:50.347211   62379 addons.go:243] addon default-storageclass should already be in state true
	I1026 02:04:50.347239   62379 host.go:66] Checking if "embed-certs-767480" exists ...
	I1026 02:04:50.347597   62379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:04:50.347634   62379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:04:50.359213   62379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44717
	I1026 02:04:50.359852   62379 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:04:50.360411   62379 main.go:141] libmachine: Using API Version  1
	I1026 02:04:50.360429   62379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:04:50.360809   62379 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:04:50.360991   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetState
	I1026 02:04:50.362905   62379 main.go:141] libmachine: (embed-certs-767480) Calling .DriverName
	I1026 02:04:50.363490   62379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42939
	I1026 02:04:50.364238   62379 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:04:50.364736   62379 main.go:141] libmachine: Using API Version  1
	I1026 02:04:50.364750   62379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:04:50.365098   62379 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:04:50.365277   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetState
	I1026 02:04:50.365401   62379 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:04:50.366436   62379 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:04:50.366457   62379 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 02:04:50.366476   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHHostname
	I1026 02:04:50.366678   62379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37323
	I1026 02:04:50.367178   62379 main.go:141] libmachine: (embed-certs-767480) Calling .DriverName
	I1026 02:04:50.367254   62379 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:04:50.367687   62379 main.go:141] libmachine: Using API Version  1
	I1026 02:04:50.367718   62379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:04:50.368329   62379 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:04:50.368582   62379 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1026 02:04:50.369004   62379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:04:50.369044   62379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:04:50.369396   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:50.369665   62379 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 02:04:50.369685   62379 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 02:04:50.369705   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHHostname
	I1026 02:04:50.369828   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:50.369850   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:50.370008   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHPort
	I1026 02:04:50.370148   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:50.370251   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHUsername
	I1026 02:04:50.370371   62379 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/embed-certs-767480/id_rsa Username:docker}
	I1026 02:04:50.372330   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:50.372685   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:50.372708   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:50.372832   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHPort
	I1026 02:04:50.373000   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:50.373159   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHUsername
	I1026 02:04:50.373283   62379 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/embed-certs-767480/id_rsa Username:docker}
	I1026 02:04:50.384641   62379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37063
	I1026 02:04:50.385047   62379 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:04:50.385548   62379 main.go:141] libmachine: Using API Version  1
	I1026 02:04:50.385558   62379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:04:50.385833   62379 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:04:50.386007   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetState
	I1026 02:04:50.387487   62379 main.go:141] libmachine: (embed-certs-767480) Calling .DriverName
	I1026 02:04:50.387669   62379 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 02:04:50.387679   62379 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 02:04:50.387690   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHHostname
	I1026 02:04:50.390756   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:50.391165   62379 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 03:04:27 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 02:04:50.391176   62379 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 02:04:50.391324   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHPort
	I1026 02:04:50.391427   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 02:04:50.391495   62379 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHUsername
	I1026 02:04:50.391555   62379 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/embed-certs-767480/id_rsa Username:docker}
	I1026 02:04:50.531325   62379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:04:50.573635   62379 node_ready.go:35] waiting up to 6m0s for node "embed-certs-767480" to be "Ready" ...
	I1026 02:04:50.611358   62379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 02:04:50.623875   62379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:04:50.741773   62379 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 02:04:50.741796   62379 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1026 02:04:50.815220   62379 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 02:04:50.815243   62379 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 02:04:50.858916   62379 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 02:04:50.858944   62379 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 02:04:50.884939   62379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 02:04:50.910325   62379 main.go:141] libmachine: Making call to close driver server
	I1026 02:04:50.910358   62379 main.go:141] libmachine: (embed-certs-767480) Calling .Close
	I1026 02:04:50.910643   62379 main.go:141] libmachine: (embed-certs-767480) DBG | Closing plugin on server side
	I1026 02:04:50.910651   62379 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:04:50.910666   62379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:04:50.910678   62379 main.go:141] libmachine: Making call to close driver server
	I1026 02:04:50.910686   62379 main.go:141] libmachine: (embed-certs-767480) Calling .Close
	I1026 02:04:50.910932   62379 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:04:50.910943   62379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:04:50.910969   62379 main.go:141] libmachine: (embed-certs-767480) DBG | Closing plugin on server side
	I1026 02:04:50.916472   62379 main.go:141] libmachine: Making call to close driver server
	I1026 02:04:50.916492   62379 main.go:141] libmachine: (embed-certs-767480) Calling .Close
	I1026 02:04:50.916776   62379 main.go:141] libmachine: (embed-certs-767480) DBG | Closing plugin on server side
	I1026 02:04:50.916807   62379 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:04:50.916822   62379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:04:51.534125   62379 main.go:141] libmachine: Making call to close driver server
	I1026 02:04:51.534154   62379 main.go:141] libmachine: (embed-certs-767480) Calling .Close
	I1026 02:04:51.534428   62379 main.go:141] libmachine: (embed-certs-767480) DBG | Closing plugin on server side
	I1026 02:04:51.534513   62379 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:04:51.534531   62379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:04:51.534546   62379 main.go:141] libmachine: Making call to close driver server
	I1026 02:04:51.534583   62379 main.go:141] libmachine: (embed-certs-767480) Calling .Close
	I1026 02:04:51.534836   62379 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:04:51.534852   62379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:04:51.626691   62379 main.go:141] libmachine: Making call to close driver server
	I1026 02:04:51.626715   62379 main.go:141] libmachine: (embed-certs-767480) Calling .Close
	I1026 02:04:51.627012   62379 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:04:51.627027   62379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:04:51.627028   62379 main.go:141] libmachine: (embed-certs-767480) DBG | Closing plugin on server side
	I1026 02:04:51.627041   62379 main.go:141] libmachine: Making call to close driver server
	I1026 02:04:51.627050   62379 main.go:141] libmachine: (embed-certs-767480) Calling .Close
	I1026 02:04:51.627271   62379 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:04:51.627283   62379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:04:51.627293   62379 addons.go:475] Verifying addon metrics-server=true in "embed-certs-767480"
	I1026 02:04:51.630135   62379 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1026 02:04:53.093822   62203 start.go:364] duration metric: took 31.217040954s to acquireMachinesLock for "no-preload-093148"
	I1026 02:04:53.093880   62203 start.go:96] Skipping create...Using existing machine configuration
	I1026 02:04:53.093892   62203 fix.go:54] fixHost starting: 
	I1026 02:04:53.094325   62203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:04:53.094359   62203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:04:53.113667   62203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45591
	I1026 02:04:53.114044   62203 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:04:53.114455   62203 main.go:141] libmachine: Using API Version  1
	I1026 02:04:53.114475   62203 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:04:53.114822   62203 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:04:53.115006   62203 main.go:141] libmachine: (no-preload-093148) Calling .DriverName
	I1026 02:04:53.115139   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetState
	I1026 02:04:53.116606   62203 fix.go:112] recreateIfNeeded on no-preload-093148: state=Stopped err=<nil>
	I1026 02:04:53.116628   62203 main.go:141] libmachine: (no-preload-093148) Calling .DriverName
	W1026 02:04:53.116799   62203 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 02:04:53.118627   62203 out.go:177] * Restarting existing kvm2 VM for "no-preload-093148" ...
	I1026 02:04:51.631486   62379 addons.go:510] duration metric: took 1.305615369s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1026 02:04:52.576790   62379 node_ready.go:53] node "embed-certs-767480" has status "Ready":"False"
	I1026 02:04:51.806766   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:51.807223   62745 main.go:141] libmachine: (old-k8s-version-385716) Found IP for machine: 192.168.39.33
	I1026 02:04:51.807244   62745 main.go:141] libmachine: (old-k8s-version-385716) Reserving static IP address...
	I1026 02:04:51.807260   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has current primary IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:51.807631   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "old-k8s-version-385716", mac: "52:54:00:f3:3d:37", ip: "192.168.39.33"} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:51.807660   62745 main.go:141] libmachine: (old-k8s-version-385716) Reserved static IP address: 192.168.39.33
	I1026 02:04:51.807682   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | skip adding static IP to network mk-old-k8s-version-385716 - found existing host DHCP lease matching {name: "old-k8s-version-385716", mac: "52:54:00:f3:3d:37", ip: "192.168.39.33"}
	I1026 02:04:51.807702   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | Getting to WaitForSSH function...
	I1026 02:04:51.807720   62745 main.go:141] libmachine: (old-k8s-version-385716) Waiting for SSH to be available...
	I1026 02:04:51.809812   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:51.810208   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:51.810240   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:51.810346   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | Using SSH client type: external
	I1026 02:04:51.810374   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa (-rw-------)
	I1026 02:04:51.810409   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.33 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 02:04:51.810433   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | About to run SSH command:
	I1026 02:04:51.810447   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | exit 0
	I1026 02:04:51.933521   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | SSH cmd err, output: <nil>: 
	I1026 02:04:51.933852   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetConfigRaw
	I1026 02:04:51.934587   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetIP
	I1026 02:04:51.937932   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:51.938342   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:51.938376   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:51.938654   62745 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/config.json ...
	I1026 02:04:51.938912   62745 machine.go:93] provisionDockerMachine start ...
	I1026 02:04:51.938936   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:04:51.939142   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:51.941577   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:51.941907   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:51.941938   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:51.942101   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:51.942277   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:51.942448   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:51.942577   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:51.942738   62745 main.go:141] libmachine: Using SSH client type: native
	I1026 02:04:51.942988   62745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I1026 02:04:51.943004   62745 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 02:04:52.041280   62745 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1026 02:04:52.041310   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetMachineName
	I1026 02:04:52.041535   62745 buildroot.go:166] provisioning hostname "old-k8s-version-385716"
	I1026 02:04:52.041558   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetMachineName
	I1026 02:04:52.041750   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:52.044276   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.044625   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:52.044654   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.044794   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:52.044973   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.045125   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.045249   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:52.045402   62745 main.go:141] libmachine: Using SSH client type: native
	I1026 02:04:52.045586   62745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I1026 02:04:52.045601   62745 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-385716 && echo "old-k8s-version-385716" | sudo tee /etc/hostname
	I1026 02:04:52.158916   62745 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-385716
	
	I1026 02:04:52.158952   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:52.161567   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.161930   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:52.161957   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.162150   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:52.162318   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.162443   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.162589   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:52.162739   62745 main.go:141] libmachine: Using SSH client type: native
	I1026 02:04:52.162921   62745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I1026 02:04:52.162937   62745 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-385716' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-385716/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-385716' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 02:04:52.269922   62745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
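The shell fragment above makes sure /etc/hosts maps 127.0.1.1 to the new machine hostname, rewriting an existing 127.0.1.1 entry or appending one if it is missing. A simplified, hypothetical Go equivalent of that idea, operating on an arbitrary hosts-style file rather than the real /etc/hosts (file path and hostname are illustrative parameters, not minikube's actual provisioning code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites an existing "127.0.1.1 ..." line to point at
// hostname, or appends one if no such line exists (simplified: it skips the
// "hostname already present anywhere" guard the shell version has).
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	replaced := false
	for i, line := range lines {
		if strings.HasPrefix(strings.TrimSpace(line), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	// Demo against a temp file rather than the real /etc/hosts.
	f, err := os.CreateTemp("", "hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	f.WriteString("127.0.0.1 localhost\n")
	f.Close()
	if err := ensureHostsEntry(f.Name(), "old-k8s-version-385716"); err != nil {
		fmt.Println(err)
		return
	}
	out, _ := os.ReadFile(f.Name())
	fmt.Print(string(out))
}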
	I1026 02:04:52.269956   62745 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 02:04:52.269995   62745 buildroot.go:174] setting up certificates
	I1026 02:04:52.270003   62745 provision.go:84] configureAuth start
	I1026 02:04:52.270012   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetMachineName
	I1026 02:04:52.270280   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetIP
	I1026 02:04:52.272938   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.273310   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:52.273346   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.273510   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:52.275383   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.275640   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:52.275672   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.275820   62745 provision.go:143] copyHostCerts
	I1026 02:04:52.275894   62745 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 02:04:52.275912   62745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 02:04:52.275989   62745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 02:04:52.276115   62745 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 02:04:52.276125   62745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 02:04:52.276158   62745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 02:04:52.276233   62745 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 02:04:52.276242   62745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 02:04:52.276269   62745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 02:04:52.276336   62745 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-385716 san=[127.0.0.1 192.168.39.33 localhost minikube old-k8s-version-385716]
	I1026 02:04:52.499439   62745 provision.go:177] copyRemoteCerts
	I1026 02:04:52.499509   62745 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 02:04:52.499540   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:52.502255   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.502611   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:52.502652   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.502822   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:52.503012   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.503155   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:52.503272   62745 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa Username:docker}
	I1026 02:04:52.587057   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1026 02:04:52.609360   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 02:04:52.630632   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 02:04:52.651916   62745 provision.go:87] duration metric: took 381.902063ms to configureAuth
	I1026 02:04:52.651946   62745 buildroot.go:189] setting minikube options for container-runtime
	I1026 02:04:52.652125   62745 config.go:182] Loaded profile config "old-k8s-version-385716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1026 02:04:52.652208   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:52.654847   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.655123   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:52.655151   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.655334   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:52.655512   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.655665   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.655839   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:52.656009   62745 main.go:141] libmachine: Using SSH client type: native
	I1026 02:04:52.656162   62745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I1026 02:04:52.656177   62745 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 02:04:52.869041   62745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 02:04:52.869062   62745 machine.go:96] duration metric: took 930.134589ms to provisionDockerMachine
	I1026 02:04:52.869073   62745 start.go:293] postStartSetup for "old-k8s-version-385716" (driver="kvm2")
	I1026 02:04:52.869086   62745 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 02:04:52.869109   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:04:52.869393   62745 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 02:04:52.869430   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:52.871942   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.872247   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:52.872274   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.872431   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:52.872627   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.872791   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:52.872931   62745 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa Username:docker}
	I1026 02:04:52.951357   62745 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 02:04:52.955344   62745 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 02:04:52.955365   62745 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 02:04:52.955428   62745 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 02:04:52.955497   62745 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 02:04:52.955581   62745 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 02:04:52.965327   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:04:52.988688   62745 start.go:296] duration metric: took 119.602944ms for postStartSetup
	I1026 02:04:52.988728   62745 fix.go:56] duration metric: took 18.686705472s for fixHost
	I1026 02:04:52.988752   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:52.990958   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.991277   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:52.991305   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.991406   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:52.991593   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.991745   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.991877   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:52.992029   62745 main.go:141] libmachine: Using SSH client type: native
	I1026 02:04:52.992178   62745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I1026 02:04:52.992187   62745 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 02:04:53.093645   62745 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729908293.069341261
	
	I1026 02:04:53.093666   62745 fix.go:216] guest clock: 1729908293.069341261
	I1026 02:04:53.093676   62745 fix.go:229] Guest: 2024-10-26 02:04:53.069341261 +0000 UTC Remote: 2024-10-26 02:04:52.988733346 +0000 UTC m=+253.848836792 (delta=80.607915ms)
	I1026 02:04:53.093701   62745 fix.go:200] guest clock delta is within tolerance: 80.607915ms
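The guest/host clock comparison logged above amounts to taking the difference between the two timestamps and accepting it when it falls under a tolerance. A minimal sketch of that check in Go, with the one-second tolerance and the sample timestamps chosen here purely for illustration (not values taken from minikube's source):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute guest/host clock delta and whether it
// is small enough to accept; the tolerance is an assumed illustrative value.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(80 * time.Millisecond) // roughly the delta seen in the log
	delta, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}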
	I1026 02:04:53.093716   62745 start.go:83] releasing machines lock for "old-k8s-version-385716", held for 18.791723963s
	I1026 02:04:53.093747   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:04:53.094026   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetIP
	I1026 02:04:53.096804   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:53.097196   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:53.097232   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:53.097353   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:04:53.097855   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:04:53.098045   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:04:53.098101   62745 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 02:04:53.098154   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:53.098250   62745 ssh_runner.go:195] Run: cat /version.json
	I1026 02:04:53.098277   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:53.100486   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:53.100774   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:53.100814   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:53.100946   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:53.100954   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:53.101122   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:53.101277   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:53.101301   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:53.101338   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:53.101445   62745 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa Username:docker}
	I1026 02:04:53.101546   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:53.101671   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:53.101812   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:53.101970   62745 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa Username:docker}
	I1026 02:04:53.207938   62745 ssh_runner.go:195] Run: systemctl --version
	I1026 02:04:53.213560   62745 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 02:04:53.354252   62745 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 02:04:53.361628   62745 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 02:04:53.361692   62745 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 02:04:53.379919   62745 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 02:04:53.379947   62745 start.go:495] detecting cgroup driver to use...
	I1026 02:04:53.380013   62745 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 02:04:53.394591   62745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 02:04:53.407921   62745 docker.go:217] disabling cri-docker service (if available) ...
	I1026 02:04:53.407972   62745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 02:04:53.420732   62745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 02:04:53.433679   62745 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 02:04:53.543848   62745 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 02:04:53.696256   62745 docker.go:233] disabling docker service ...
	I1026 02:04:53.696335   62745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 02:04:53.712952   62745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 02:04:53.726273   62745 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 02:04:53.869139   62745 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 02:04:53.990619   62745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 02:04:54.003422   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 02:04:54.021067   62745 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1026 02:04:54.021139   62745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:04:54.030585   62745 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 02:04:54.030662   62745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:04:54.040121   62745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:04:54.049648   62745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:04:54.059293   62745 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 02:04:54.069549   62745 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 02:04:54.078429   62745 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 02:04:54.078477   62745 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 02:04:54.091600   62745 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
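The sequence above probes the bridge netfilter sysctl, falls back to loading br_netfilter when the /proc entry is missing, and then enables IPv4 forwarding. A rough Go sketch of that probe-then-fallback pattern, using the same commands the log shows; treating any probe failure as the cue to load the module is an assumption made for brevity here:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and wraps its combined output into the error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w (%s)", name, args, err, out)
	}
	return nil
}

func main() {
	// Probe the bridge netfilter sysctl; if the /proc entry is missing,
	// load the br_netfilter module, mirroring the fallback in the log.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("probe failed, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println("modprobe failed:", err)
		}
	}
	// Enable IPv4 forwarding, as the subsequent step in the log does.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}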
	I1026 02:04:54.100699   62745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:04:53.119740   62203 main.go:141] libmachine: (no-preload-093148) Calling .Start
	I1026 02:04:53.119910   62203 main.go:141] libmachine: (no-preload-093148) Ensuring networks are active...
	I1026 02:04:53.120542   62203 main.go:141] libmachine: (no-preload-093148) Ensuring network default is active
	I1026 02:04:53.120853   62203 main.go:141] libmachine: (no-preload-093148) Ensuring network mk-no-preload-093148 is active
	I1026 02:04:53.121186   62203 main.go:141] libmachine: (no-preload-093148) Getting domain xml...
	I1026 02:04:53.122079   62203 main.go:141] libmachine: (no-preload-093148) Creating domain...
	I1026 02:04:54.233461   62745 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 02:04:54.319457   62745 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 02:04:54.319533   62745 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 02:04:54.324335   62745 start.go:563] Will wait 60s for crictl version
	I1026 02:04:54.324395   62745 ssh_runner.go:195] Run: which crictl
	I1026 02:04:54.329603   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 02:04:54.381910   62745 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 02:04:54.381985   62745 ssh_runner.go:195] Run: crio --version
	I1026 02:04:54.420254   62745 ssh_runner.go:195] Run: crio --version
	I1026 02:04:54.451157   62745 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1026 02:04:54.577817   62379 node_ready.go:53] node "embed-certs-767480" has status "Ready":"False"
	I1026 02:04:57.077857   62379 node_ready.go:49] node "embed-certs-767480" has status "Ready":"True"
	I1026 02:04:57.077883   62379 node_ready.go:38] duration metric: took 6.504218798s for node "embed-certs-767480" to be "Ready" ...
	I1026 02:04:57.077895   62379 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:04:57.083124   62379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cs6fv" in "kube-system" namespace to be "Ready" ...
	I1026 02:04:57.088587   62379 pod_ready.go:93] pod "coredns-7c65d6cfc9-cs6fv" in "kube-system" namespace has status "Ready":"True"
	I1026 02:04:57.088608   62379 pod_ready.go:82] duration metric: took 5.45081ms for pod "coredns-7c65d6cfc9-cs6fv" in "kube-system" namespace to be "Ready" ...
	I1026 02:04:57.088620   62379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-767480" in "kube-system" namespace to be "Ready" ...
	I1026 02:04:57.093071   62379 pod_ready.go:93] pod "etcd-embed-certs-767480" in "kube-system" namespace has status "Ready":"True"
	I1026 02:04:57.093091   62379 pod_ready.go:82] duration metric: took 4.464537ms for pod "etcd-embed-certs-767480" in "kube-system" namespace to be "Ready" ...
	I1026 02:04:57.093100   62379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-767480" in "kube-system" namespace to be "Ready" ...
	I1026 02:04:54.452507   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetIP
	I1026 02:04:54.455334   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:54.455660   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:54.455685   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:54.455911   62745 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 02:04:54.459769   62745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:04:54.471699   62745 kubeadm.go:883] updating cluster {Name:old-k8s-version-385716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 02:04:54.471797   62745 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1026 02:04:54.471843   62745 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:04:54.517960   62745 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1026 02:04:54.518050   62745 ssh_runner.go:195] Run: which lz4
	I1026 02:04:54.522001   62745 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 02:04:54.525626   62745 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 02:04:54.525652   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1026 02:04:55.993918   62745 crio.go:462] duration metric: took 1.471949666s to copy over tarball
	I1026 02:04:55.994015   62745 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 02:04:58.883868   62745 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.889820571s)
	I1026 02:04:58.883901   62745 crio.go:469] duration metric: took 2.88994785s to extract the tarball
	I1026 02:04:58.883911   62745 ssh_runner.go:146] rm: /preloaded.tar.lz4
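The "duration metric" figures around the preload copy and extract come from timing each step and logging the elapsed time. A hypothetical sketch of that pattern; the command and label below are placeholders, not the real ssh_runner invocation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timed runs a step and reports how long it took, in the spirit of the
// "duration metric" lines in the log.
func timed(label string, step func() error) error {
	start := time.Now()
	err := step()
	fmt.Printf("duration metric: took %s to %s\n", time.Since(start), label)
	return err
}

func main() {
	// Placeholder step: listing /var stands in for the real tarball extract.
	err := timed("extract the tarball", func() error {
		return exec.Command("ls", "/var").Run()
	})
	if err != nil {
		fmt.Println("step failed:", err)
	}
}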
	I1026 02:04:58.926928   62745 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:04:58.960838   62745 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1026 02:04:58.960869   62745 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1026 02:04:58.960922   62745 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:04:58.960969   62745 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1026 02:04:58.961032   62745 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 02:04:58.961068   62745 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1026 02:04:58.961103   62745 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1026 02:04:58.961007   62745 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1026 02:04:58.961048   62745 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1026 02:04:58.961015   62745 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1026 02:04:58.962949   62745 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 02:04:58.962965   62745 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1026 02:04:58.962951   62745 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1026 02:04:58.963006   62745 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1026 02:04:58.962967   62745 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1026 02:04:58.963034   62745 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1026 02:04:58.962992   62745 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1026 02:04:58.963042   62745 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:04:54.414940   62203 main.go:141] libmachine: (no-preload-093148) Waiting to get IP...
	I1026 02:04:54.415881   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:04:54.416385   62203 main.go:141] libmachine: (no-preload-093148) DBG | unable to find current IP address of domain no-preload-093148 in network mk-no-preload-093148
	I1026 02:04:54.416466   62203 main.go:141] libmachine: (no-preload-093148) DBG | I1026 02:04:54.416359   63864 retry.go:31] will retry after 230.985607ms: waiting for machine to come up
	I1026 02:04:54.648812   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:04:54.649269   62203 main.go:141] libmachine: (no-preload-093148) DBG | unable to find current IP address of domain no-preload-093148 in network mk-no-preload-093148
	I1026 02:04:54.649294   62203 main.go:141] libmachine: (no-preload-093148) DBG | I1026 02:04:54.649232   63864 retry.go:31] will retry after 371.284349ms: waiting for machine to come up
	I1026 02:04:55.021786   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:04:55.022387   62203 main.go:141] libmachine: (no-preload-093148) DBG | unable to find current IP address of domain no-preload-093148 in network mk-no-preload-093148
	I1026 02:04:55.022421   62203 main.go:141] libmachine: (no-preload-093148) DBG | I1026 02:04:55.022323   63864 retry.go:31] will retry after 387.432343ms: waiting for machine to come up
	I1026 02:04:55.411634   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:04:55.412193   62203 main.go:141] libmachine: (no-preload-093148) DBG | unable to find current IP address of domain no-preload-093148 in network mk-no-preload-093148
	I1026 02:04:55.412214   62203 main.go:141] libmachine: (no-preload-093148) DBG | I1026 02:04:55.412147   63864 retry.go:31] will retry after 571.160869ms: waiting for machine to come up
	I1026 02:04:55.984909   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:04:55.985522   62203 main.go:141] libmachine: (no-preload-093148) DBG | unable to find current IP address of domain no-preload-093148 in network mk-no-preload-093148
	I1026 02:04:55.985545   62203 main.go:141] libmachine: (no-preload-093148) DBG | I1026 02:04:55.985399   63864 retry.go:31] will retry after 603.579461ms: waiting for machine to come up
	I1026 02:04:56.590145   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:04:56.590662   62203 main.go:141] libmachine: (no-preload-093148) DBG | unable to find current IP address of domain no-preload-093148 in network mk-no-preload-093148
	I1026 02:04:56.590695   62203 main.go:141] libmachine: (no-preload-093148) DBG | I1026 02:04:56.590622   63864 retry.go:31] will retry after 815.343751ms: waiting for machine to come up
	I1026 02:04:57.407709   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:04:57.408196   62203 main.go:141] libmachine: (no-preload-093148) DBG | unable to find current IP address of domain no-preload-093148 in network mk-no-preload-093148
	I1026 02:04:57.408241   62203 main.go:141] libmachine: (no-preload-093148) DBG | I1026 02:04:57.408155   63864 retry.go:31] will retry after 751.850038ms: waiting for machine to come up
	I1026 02:04:58.161451   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:04:58.162079   62203 main.go:141] libmachine: (no-preload-093148) DBG | unable to find current IP address of domain no-preload-093148 in network mk-no-preload-093148
	I1026 02:04:58.162103   62203 main.go:141] libmachine: (no-preload-093148) DBG | I1026 02:04:58.162022   63864 retry.go:31] will retry after 1.402551703s: waiting for machine to come up
	I1026 02:04:59.102177   62379 pod_ready.go:103] pod "kube-apiserver-embed-certs-767480" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:01.677663   62379 pod_ready.go:103] pod "kube-apiserver-embed-certs-767480" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:02.602146   62379 pod_ready.go:93] pod "kube-apiserver-embed-certs-767480" in "kube-system" namespace has status "Ready":"True"
	I1026 02:05:02.602177   62379 pod_ready.go:82] duration metric: took 5.509068101s for pod "kube-apiserver-embed-certs-767480" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:02.602194   62379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-767480" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:02.608928   62379 pod_ready.go:93] pod "kube-controller-manager-embed-certs-767480" in "kube-system" namespace has status "Ready":"True"
	I1026 02:05:02.608953   62379 pod_ready.go:82] duration metric: took 6.749046ms for pod "kube-controller-manager-embed-certs-767480" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:02.608968   62379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nlwh5" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:02.615519   62379 pod_ready.go:93] pod "kube-proxy-nlwh5" in "kube-system" namespace has status "Ready":"True"
	I1026 02:05:02.615542   62379 pod_ready.go:82] duration metric: took 6.567008ms for pod "kube-proxy-nlwh5" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:02.615552   62379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-767480" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:02.623093   62379 pod_ready.go:93] pod "kube-scheduler-embed-certs-767480" in "kube-system" namespace has status "Ready":"True"
	I1026 02:05:02.623124   62379 pod_ready.go:82] duration metric: took 7.564886ms for pod "kube-scheduler-embed-certs-767480" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:02.623138   62379 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace to be "Ready" ...
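The pod_ready waits above follow the usual poll-until-ready-or-timeout shape: check the pod's Ready condition, sleep, and repeat until the deadline. A simplified sketch of such a loop, with a stubbed readiness check standing in for the real Kubernetes API call (the names and durations here are illustrative, not minikube's):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForReady polls check until it reports ready or the timeout elapses.
func waitForReady(name string, timeout, interval time.Duration, check func() bool) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for " + name + " to be Ready")
}

func main() {
	// Stub: pretend the pod becomes ready after three polls.
	polls := 0
	check := func() bool { polls++; return polls >= 3 }

	start := time.Now()
	if err := waitForReady("kube-apiserver (stub)", 10*time.Second, 100*time.Millisecond, check); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("duration metric: took %s for stub pod to be Ready\n", time.Since(start))
}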
	I1026 02:04:59.214479   62745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1026 02:04:59.214983   62745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1026 02:04:59.217945   62745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 02:04:59.218962   62745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1026 02:04:59.227143   62745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1026 02:04:59.230137   62745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1026 02:04:59.231061   62745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1026 02:04:59.359793   62745 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1026 02:04:59.359849   62745 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1026 02:04:59.359906   62745 ssh_runner.go:195] Run: which crictl
	I1026 02:04:59.359910   62745 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1026 02:04:59.359941   62745 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1026 02:04:59.359980   62745 ssh_runner.go:195] Run: which crictl
	I1026 02:04:59.395980   62745 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1026 02:04:59.396030   62745 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 02:04:59.396050   62745 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1026 02:04:59.396066   62745 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1026 02:04:59.396082   62745 ssh_runner.go:195] Run: which crictl
	I1026 02:04:59.396092   62745 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1026 02:04:59.396095   62745 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1026 02:04:59.396138   62745 ssh_runner.go:195] Run: which crictl
	I1026 02:04:59.396168   62745 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1026 02:04:59.396138   62745 ssh_runner.go:195] Run: which crictl
	I1026 02:04:59.396197   62745 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1026 02:04:59.396233   62745 ssh_runner.go:195] Run: which crictl
	I1026 02:04:59.399339   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1026 02:04:59.399382   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1026 02:04:59.399463   62745 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1026 02:04:59.399494   62745 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1026 02:04:59.399530   62745 ssh_runner.go:195] Run: which crictl
	I1026 02:04:59.406867   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1026 02:04:59.406919   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1026 02:04:59.406954   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1026 02:04:59.407187   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 02:04:59.512171   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1026 02:04:59.512185   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1026 02:04:59.512171   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1026 02:04:59.524252   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1026 02:04:59.524253   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1026 02:04:59.534571   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 02:04:59.534655   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1026 02:04:59.638041   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1026 02:04:59.643736   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1026 02:04:59.678053   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1026 02:04:59.678117   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1026 02:04:59.678266   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1026 02:04:59.703981   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1026 02:04:59.703981   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 02:04:59.789073   62745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1026 02:04:59.789147   62745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1026 02:04:59.813698   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1026 02:04:59.813728   62745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1026 02:04:59.813746   62745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1026 02:04:59.822258   62745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1026 02:04:59.828510   62745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1026 02:04:59.852264   62745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1026 02:05:00.143182   62745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:05:00.285257   62745 cache_images.go:92] duration metric: took 1.324368126s to LoadCachedImages
	W1026 02:05:00.285350   62745 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1026 02:05:00.285367   62745 kubeadm.go:934] updating node { 192.168.39.33 8443 v1.20.0 crio true true} ...
	I1026 02:05:00.285486   62745 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-385716 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 02:05:00.285571   62745 ssh_runner.go:195] Run: crio config
	I1026 02:05:00.335736   62745 cni.go:84] Creating CNI manager for ""
	I1026 02:05:00.335764   62745 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:05:00.335779   62745 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 02:05:00.335797   62745 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.33 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-385716 NodeName:old-k8s-version-385716 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1026 02:05:00.335929   62745 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-385716"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 02:05:00.335988   62745 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1026 02:05:00.346410   62745 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 02:05:00.346490   62745 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 02:05:00.356388   62745 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1026 02:05:00.373587   62745 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 02:05:00.389716   62745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1026 02:05:00.406194   62745 ssh_runner.go:195] Run: grep 192.168.39.33	control-plane.minikube.internal$ /etc/hosts
	I1026 02:05:00.409900   62745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:05:00.421876   62745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:05:00.547228   62745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:05:00.563383   62745 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716 for IP: 192.168.39.33
	I1026 02:05:00.563409   62745 certs.go:194] generating shared ca certs ...
	I1026 02:05:00.563429   62745 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:05:00.563601   62745 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 02:05:00.563657   62745 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 02:05:00.563670   62745 certs.go:256] generating profile certs ...
	I1026 02:05:00.563798   62745 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.key
	I1026 02:05:00.629961   62745 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.key.63a78891
	I1026 02:05:00.630065   62745 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/proxy-client.key
	I1026 02:05:00.630247   62745 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 02:05:00.630291   62745 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 02:05:00.630311   62745 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 02:05:00.630345   62745 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 02:05:00.630381   62745 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 02:05:00.630418   62745 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 02:05:00.630475   62745 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:05:00.631357   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 02:05:00.675285   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 02:05:00.714335   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 02:05:00.755344   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 02:05:00.787528   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 02:05:00.826139   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 02:05:00.851102   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 02:05:00.875425   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 02:05:00.900226   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 02:05:00.931632   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 02:05:00.959203   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 02:05:00.983986   62745 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 02:05:01.000930   62745 ssh_runner.go:195] Run: openssl version
	I1026 02:05:01.007168   62745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 02:05:01.018252   62745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 02:05:01.022960   62745 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 02:05:01.023022   62745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 02:05:01.028915   62745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 02:05:01.039800   62745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 02:05:01.050925   62745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:05:01.055754   62745 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:05:01.055809   62745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:05:01.061382   62745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 02:05:01.071996   62745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 02:05:01.082621   62745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 02:05:01.087522   62745 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 02:05:01.087608   62745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 02:05:01.093377   62745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 02:05:01.104331   62745 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 02:05:01.109313   62745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 02:05:01.115603   62745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 02:05:01.122183   62745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 02:05:01.128868   62745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 02:05:01.135327   62745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 02:05:01.142955   62745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 02:05:01.151353   62745 kubeadm.go:392] StartCluster: {Name:old-k8s-version-385716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:05:01.151447   62745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 02:05:01.151537   62745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:05:01.200766   62745 cri.go:89] found id: ""
	I1026 02:05:01.200845   62745 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 02:05:01.211671   62745 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1026 02:05:01.211697   62745 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1026 02:05:01.211760   62745 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 02:05:01.222114   62745 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 02:05:01.223151   62745 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-385716" does not appear in /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:05:01.223791   62745 kubeconfig.go:62] /home/jenkins/minikube-integration/19868-8680/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-385716" cluster setting kubeconfig missing "old-k8s-version-385716" context setting]
	I1026 02:05:01.224728   62745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:05:01.289209   62745 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 02:05:01.300342   62745 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.33
	I1026 02:05:01.300385   62745 kubeadm.go:1160] stopping kube-system containers ...
	I1026 02:05:01.300400   62745 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1026 02:05:01.300462   62745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:05:01.340462   62745 cri.go:89] found id: ""
	I1026 02:05:01.340538   62745 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1026 02:05:01.357940   62745 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:05:01.367863   62745 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:05:01.367885   62745 kubeadm.go:157] found existing configuration files:
	
	I1026 02:05:01.367940   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 02:05:01.378121   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:05:01.378189   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:05:01.388445   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 02:05:01.398096   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:05:01.398170   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:05:01.407914   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 02:05:01.418110   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:05:01.418177   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:05:01.428678   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 02:05:01.438749   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:05:01.438850   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 02:05:01.450759   62745 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 02:05:01.461160   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:05:01.597114   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:05:02.376008   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:05:02.620455   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:05:02.753408   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:05:02.827566   62745 api_server.go:52] waiting for apiserver process to appear ...
	I1026 02:05:02.827662   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:03.327825   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:03.828494   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:04:59.566687   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:04:59.567306   62203 main.go:141] libmachine: (no-preload-093148) DBG | unable to find current IP address of domain no-preload-093148 in network mk-no-preload-093148
	I1026 02:04:59.567340   62203 main.go:141] libmachine: (no-preload-093148) DBG | I1026 02:04:59.567233   63864 retry.go:31] will retry after 1.619387442s: waiting for machine to come up
	I1026 02:05:01.188970   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:01.189510   62203 main.go:141] libmachine: (no-preload-093148) DBG | unable to find current IP address of domain no-preload-093148 in network mk-no-preload-093148
	I1026 02:05:01.189542   62203 main.go:141] libmachine: (no-preload-093148) DBG | I1026 02:05:01.189448   63864 retry.go:31] will retry after 1.868396931s: waiting for machine to come up
	I1026 02:05:03.058974   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:03.059589   62203 main.go:141] libmachine: (no-preload-093148) DBG | unable to find current IP address of domain no-preload-093148 in network mk-no-preload-093148
	I1026 02:05:03.059620   62203 main.go:141] libmachine: (no-preload-093148) DBG | I1026 02:05:03.059525   63864 retry.go:31] will retry after 2.393934887s: waiting for machine to come up
	I1026 02:05:04.629923   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:07.129797   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:04.328718   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:04.828766   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:05.328706   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:05.827729   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:06.327930   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:06.828400   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:07.327815   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:07.827702   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:08.327796   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:08.828718   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:05.454808   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:05.455322   62203 main.go:141] libmachine: (no-preload-093148) DBG | unable to find current IP address of domain no-preload-093148 in network mk-no-preload-093148
	I1026 02:05:05.455434   62203 main.go:141] libmachine: (no-preload-093148) DBG | I1026 02:05:05.455370   63864 retry.go:31] will retry after 2.36672174s: waiting for machine to come up
	I1026 02:05:07.824961   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:07.825462   62203 main.go:141] libmachine: (no-preload-093148) DBG | unable to find current IP address of domain no-preload-093148 in network mk-no-preload-093148
	I1026 02:05:07.825489   62203 main.go:141] libmachine: (no-preload-093148) DBG | I1026 02:05:07.825405   63864 retry.go:31] will retry after 4.137233992s: waiting for machine to come up
	I1026 02:05:09.629734   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:11.630697   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:09.327723   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:09.828684   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:10.327773   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:10.828577   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:11.328614   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:11.828477   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:12.327916   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:12.828195   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:13.327743   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:13.827732   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:11.967078   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:11.967752   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has current primary IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:11.967772   62203 main.go:141] libmachine: (no-preload-093148) Found IP for machine: 192.168.50.9
	I1026 02:05:11.967784   62203 main.go:141] libmachine: (no-preload-093148) Reserving static IP address...
	I1026 02:05:11.968256   62203 main.go:141] libmachine: (no-preload-093148) Reserved static IP address: 192.168.50.9
	I1026 02:05:11.968294   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "no-preload-093148", mac: "52:54:00:bc:d1:f6", ip: "192.168.50.9"} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:11.968316   62203 main.go:141] libmachine: (no-preload-093148) Waiting for SSH to be available...
	I1026 02:05:11.968343   62203 main.go:141] libmachine: (no-preload-093148) DBG | skip adding static IP to network mk-no-preload-093148 - found existing host DHCP lease matching {name: "no-preload-093148", mac: "52:54:00:bc:d1:f6", ip: "192.168.50.9"}
	I1026 02:05:11.968363   62203 main.go:141] libmachine: (no-preload-093148) DBG | Getting to WaitForSSH function...
	I1026 02:05:11.970199   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:11.970478   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:11.970510   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:11.970697   62203 main.go:141] libmachine: (no-preload-093148) DBG | Using SSH client type: external
	I1026 02:05:11.970727   62203 main.go:141] libmachine: (no-preload-093148) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/no-preload-093148/id_rsa (-rw-------)
	I1026 02:05:11.970777   62203 main.go:141] libmachine: (no-preload-093148) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.9 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/no-preload-093148/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 02:05:11.970798   62203 main.go:141] libmachine: (no-preload-093148) DBG | About to run SSH command:
	I1026 02:05:11.970808   62203 main.go:141] libmachine: (no-preload-093148) DBG | exit 0
	I1026 02:05:12.093573   62203 main.go:141] libmachine: (no-preload-093148) DBG | SSH cmd err, output: <nil>: 
	I1026 02:05:12.093974   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetConfigRaw
	I1026 02:05:12.094670   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetIP
	I1026 02:05:12.097446   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.097871   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:12.097904   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.098240   62203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/config.json ...
	I1026 02:05:12.098526   62203 machine.go:93] provisionDockerMachine start ...
	I1026 02:05:12.098549   62203 main.go:141] libmachine: (no-preload-093148) Calling .DriverName
	I1026 02:05:12.098817   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHHostname
	I1026 02:05:12.101396   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.101752   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:12.101782   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.101992   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHPort
	I1026 02:05:12.102156   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:12.102351   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:12.102533   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHUsername
	I1026 02:05:12.102685   62203 main.go:141] libmachine: Using SSH client type: native
	I1026 02:05:12.102881   62203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I1026 02:05:12.102891   62203 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 02:05:12.210086   62203 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1026 02:05:12.210115   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetMachineName
	I1026 02:05:12.210360   62203 buildroot.go:166] provisioning hostname "no-preload-093148"
	I1026 02:05:12.210401   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetMachineName
	I1026 02:05:12.210613   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHHostname
	I1026 02:05:12.213279   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.213634   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:12.213664   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.213781   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHPort
	I1026 02:05:12.213957   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:12.214144   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:12.214284   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHUsername
	I1026 02:05:12.214479   62203 main.go:141] libmachine: Using SSH client type: native
	I1026 02:05:12.214684   62203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I1026 02:05:12.214697   62203 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-093148 && echo "no-preload-093148" | sudo tee /etc/hostname
	I1026 02:05:12.331798   62203 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-093148
	
	I1026 02:05:12.331821   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHHostname
	I1026 02:05:12.335025   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.335441   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:12.335465   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.335755   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHPort
	I1026 02:05:12.335954   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:12.336139   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:12.336319   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHUsername
	I1026 02:05:12.336494   62203 main.go:141] libmachine: Using SSH client type: native
	I1026 02:05:12.336733   62203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I1026 02:05:12.336762   62203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-093148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-093148/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-093148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 02:05:12.446228   62203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:05:12.446261   62203 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 02:05:12.446287   62203 buildroot.go:174] setting up certificates
	I1026 02:05:12.446295   62203 provision.go:84] configureAuth start
	I1026 02:05:12.446306   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetMachineName
	I1026 02:05:12.446548   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetIP
	I1026 02:05:12.449366   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.449799   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:12.449830   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.450007   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHHostname
	I1026 02:05:12.452345   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.452718   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:12.452736   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.452976   62203 provision.go:143] copyHostCerts
	I1026 02:05:12.453054   62203 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 02:05:12.453069   62203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 02:05:12.453132   62203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 02:05:12.453264   62203 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 02:05:12.453274   62203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 02:05:12.453309   62203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 02:05:12.453387   62203 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 02:05:12.453397   62203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 02:05:12.453439   62203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 02:05:12.453518   62203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.no-preload-093148 san=[127.0.0.1 192.168.50.9 localhost minikube no-preload-093148]
	I1026 02:05:12.551305   62203 provision.go:177] copyRemoteCerts
	I1026 02:05:12.551361   62203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 02:05:12.551387   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHHostname
	I1026 02:05:12.554226   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.554544   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:12.554577   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.554757   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHPort
	I1026 02:05:12.554955   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:12.555072   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHUsername
	I1026 02:05:12.555218   62203 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/no-preload-093148/id_rsa Username:docker}
	I1026 02:05:12.635313   62203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 02:05:12.658150   62203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 02:05:12.680696   62203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 02:05:12.703114   62203 provision.go:87] duration metric: took 256.807109ms to configureAuth
	I1026 02:05:12.703140   62203 buildroot.go:189] setting minikube options for container-runtime
	I1026 02:05:12.703308   62203 config.go:182] Loaded profile config "no-preload-093148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:05:12.703382   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHHostname
	I1026 02:05:12.705852   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.706142   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:12.706170   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.706383   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHPort
	I1026 02:05:12.706558   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:12.706730   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:12.706867   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHUsername
	I1026 02:05:12.707028   62203 main.go:141] libmachine: Using SSH client type: native
	I1026 02:05:12.707192   62203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I1026 02:05:12.707205   62203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 02:05:12.925979   62203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 02:05:12.926009   62203 machine.go:96] duration metric: took 827.468153ms to provisionDockerMachine
	I1026 02:05:12.926024   62203 start.go:293] postStartSetup for "no-preload-093148" (driver="kvm2")
	I1026 02:05:12.926035   62203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 02:05:12.926051   62203 main.go:141] libmachine: (no-preload-093148) Calling .DriverName
	I1026 02:05:12.926379   62203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 02:05:12.926420   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHHostname
	I1026 02:05:12.929065   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.929351   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:12.929392   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:12.929569   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHPort
	I1026 02:05:12.929767   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:12.929907   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHUsername
	I1026 02:05:12.930053   62203 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/no-preload-093148/id_rsa Username:docker}
	I1026 02:05:13.012727   62203 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 02:05:13.017038   62203 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 02:05:13.017067   62203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 02:05:13.017151   62203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 02:05:13.017257   62203 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 02:05:13.017376   62203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 02:05:13.026850   62203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:05:13.050065   62203 start.go:296] duration metric: took 124.025295ms for postStartSetup
	I1026 02:05:13.050126   62203 fix.go:56] duration metric: took 19.956234151s for fixHost
	I1026 02:05:13.050153   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHHostname
	I1026 02:05:13.052709   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:13.053072   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:13.053102   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:13.053279   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHPort
	I1026 02:05:13.053410   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:13.053600   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:13.053699   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHUsername
	I1026 02:05:13.053837   62203 main.go:141] libmachine: Using SSH client type: native
	I1026 02:05:13.054011   62203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I1026 02:05:13.054021   62203 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 02:05:13.154291   62203 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729908313.128642740
	
	I1026 02:05:13.154316   62203 fix.go:216] guest clock: 1729908313.128642740
	I1026 02:05:13.154327   62203 fix.go:229] Guest: 2024-10-26 02:05:13.12864274 +0000 UTC Remote: 2024-10-26 02:05:13.050132193 +0000 UTC m=+333.765074525 (delta=78.510547ms)
	I1026 02:05:13.154353   62203 fix.go:200] guest clock delta is within tolerance: 78.510547ms
	I1026 02:05:13.154358   62203 start.go:83] releasing machines lock for "no-preload-093148", held for 20.06050596s
	I1026 02:05:13.154400   62203 main.go:141] libmachine: (no-preload-093148) Calling .DriverName
	I1026 02:05:13.154622   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetIP
	I1026 02:05:13.157612   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:13.157990   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:13.158024   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:13.158170   62203 main.go:141] libmachine: (no-preload-093148) Calling .DriverName
	I1026 02:05:13.158624   62203 main.go:141] libmachine: (no-preload-093148) Calling .DriverName
	I1026 02:05:13.158816   62203 main.go:141] libmachine: (no-preload-093148) Calling .DriverName
	I1026 02:05:13.158896   62203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 02:05:13.158956   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHHostname
	I1026 02:05:13.159032   62203 ssh_runner.go:195] Run: cat /version.json
	I1026 02:05:13.159057   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHHostname
	I1026 02:05:13.161617   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:13.161896   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:13.162035   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:13.162070   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:13.162186   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHPort
	I1026 02:05:13.162355   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:13.162366   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:13.162391   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:13.162505   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHUsername
	I1026 02:05:13.162595   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHPort
	I1026 02:05:13.162739   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:13.162739   62203 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/no-preload-093148/id_rsa Username:docker}
	I1026 02:05:13.162861   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHUsername
	I1026 02:05:13.163027   62203 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/no-preload-093148/id_rsa Username:docker}
	I1026 02:05:13.266758   62203 ssh_runner.go:195] Run: systemctl --version
	I1026 02:05:13.273143   62203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 02:05:13.410778   62203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 02:05:13.417387   62203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 02:05:13.417485   62203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 02:05:13.432539   62203 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
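The two steps above look for bridge/podman CNI definitions under /etc/cni/net.d and rename them with a .mk_disabled suffix, so that only the CNI configuration minikube writes later is active (here 87-podman-bridge.conflist gets disabled). A minimal Go sketch of that rename pass, assuming the paths and suffix shown in the log; this is an illustration, not minikube's implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Mirror the `find /etc/cni/net.d ... -name *bridge* -or -name *podman*` pass from the log.
    	entries, err := filepath.Glob("/etc/cni/net.d/*")
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range entries {
    		info, err := os.Stat(p)
    		if err != nil || !info.Mode().IsRegular() {
    			continue // the log's find uses -type f
    		}
    		base := filepath.Base(p)
    		if strings.HasSuffix(base, ".mk_disabled") {
    			continue // already disabled on a previous run
    		}
    		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
    			// e.g. 87-podman-bridge.conflist -> 87-podman-bridge.conflist.mk_disabled
    			if err := os.Rename(p, p+".mk_disabled"); err != nil {
    				fmt.Fprintln(os.Stderr, "rename failed:", err)
    			}
    		}
    	}
    }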
	I1026 02:05:13.432567   62203 start.go:495] detecting cgroup driver to use...
	I1026 02:05:13.432643   62203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 02:05:13.449249   62203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 02:05:13.463506   62203 docker.go:217] disabling cri-docker service (if available) ...
	I1026 02:05:13.463576   62203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 02:05:13.477552   62203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 02:05:13.491811   62203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 02:05:13.610915   62203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 02:05:13.777600   62203 docker.go:233] disabling docker service ...
	I1026 02:05:13.777663   62203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 02:05:13.792870   62203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 02:05:13.807201   62203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 02:05:13.937895   62203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 02:05:14.053729   62203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 02:05:14.067372   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 02:05:14.085710   62203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 02:05:14.085767   62203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:05:14.096536   62203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 02:05:14.096592   62203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:05:14.107585   62203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:05:14.118257   62203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:05:14.129229   62203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 02:05:14.140612   62203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:05:14.151416   62203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:05:14.168334   62203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
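Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup driver, conmon cgroup and unprivileged-port sysctl seen in the log. Roughly, the relevant keys converge on the values below (section placement is assumed from CRI-O's usual config layout; the real file carries additional settings):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]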
	I1026 02:05:14.179159   62203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 02:05:14.189066   62203 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 02:05:14.189157   62203 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 02:05:14.203314   62203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 02:05:14.212843   62203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:05:14.323667   62203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 02:05:14.435814   62203 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 02:05:14.435876   62203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 02:05:14.440385   62203 start.go:563] Will wait 60s for crictl version
	I1026 02:05:14.440436   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:05:14.444006   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 02:05:14.487215   62203 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 02:05:14.487292   62203 ssh_runner.go:195] Run: crio --version
	I1026 02:05:14.517880   62203 ssh_runner.go:195] Run: crio --version
	I1026 02:05:14.548903   62203 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 02:05:14.129921   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:16.629778   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
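The pod_ready lines from pid 62379 come from another cluster start running in parallel; it keeps re-checking whether the metrics-server pod carries the Ready=True condition. A minimal client-go style sketch of that check, assuming a kubeconfig in $KUBECONFIG; the names and loop are illustrative, not minikube's code:

    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod carries the Ready=True condition,
    // which is what the pod_ready log lines above keep testing for.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	name := "metrics-server-6867b74b74-c9cwx" // pod name taken from the log
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		fmt.Printf("pod %q in \"kube-system\" namespace has status \"Ready\":\"False\"\n", name)
    		time.Sleep(2 * time.Second)
    	}
    }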
	I1026 02:05:14.327816   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:14.828510   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:15.328470   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:15.827751   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:16.328146   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:16.828497   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:17.328639   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:17.827804   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:18.328601   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:18.827909   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
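The burst of pgrep calls from pid 62745 is another start path waiting, at roughly 500ms intervals, for a kube-apiserver process whose command line mentions "minikube" to appear; pid 62203 reaches the same loop later (api_server.go:52). A minimal sketch of such a wait loop, assuming local command execution rather than minikube's SSH runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process whose
    // command line mentions "minikube" shows up, or the deadline passes.
    func waitForAPIServerProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// Same check as the log: exact name (-x), match full command line (-f), newest (-n).
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil && len(out) > 0 {
    			fmt.Printf("kube-apiserver pid: %s", out)
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between attempts
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServerProcess(60 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }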
	I1026 02:05:14.550267   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetIP
	I1026 02:05:14.553309   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:14.553766   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:14.553793   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:14.554022   62203 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1026 02:05:14.557999   62203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:05:14.571572   62203 kubeadm.go:883] updating cluster {Name:no-preload-093148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-093148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.9 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 02:05:14.571705   62203 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:05:14.571746   62203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:05:14.610665   62203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1026 02:05:14.610690   62203 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1026 02:05:14.610733   62203 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:05:14.610761   62203 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1026 02:05:14.610803   62203 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1026 02:05:14.610846   62203 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1026 02:05:14.610893   62203 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1026 02:05:14.610961   62203 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1026 02:05:14.610955   62203 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1026 02:05:14.610974   62203 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1026 02:05:14.612270   62203 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1026 02:05:14.612275   62203 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1026 02:05:14.612280   62203 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1026 02:05:14.612293   62203 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1026 02:05:14.612272   62203 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:05:14.612271   62203 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1026 02:05:14.612388   62203 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1026 02:05:14.612416   62203 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1026 02:05:14.826066   62203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1026 02:05:14.840188   62203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1026 02:05:14.850239   62203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1026 02:05:14.852255   62203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1026 02:05:14.852307   62203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1026 02:05:14.858409   62203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1026 02:05:14.898372   62203 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1026 02:05:14.898422   62203 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1026 02:05:14.898469   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:05:14.920262   62203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1026 02:05:15.084420   62203 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1026 02:05:15.084468   62203 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1026 02:05:15.084470   62203 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1026 02:05:15.084503   62203 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1026 02:05:15.084521   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:05:15.084536   62203 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1026 02:05:15.084562   62203 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1026 02:05:15.084579   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:05:15.084603   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:05:15.084605   62203 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1026 02:05:15.084627   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1026 02:05:15.084646   62203 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1026 02:05:15.084683   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:05:15.084691   62203 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1026 02:05:15.084720   62203 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1026 02:05:15.084763   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:05:15.121848   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1026 02:05:15.121925   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1026 02:05:15.121960   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1026 02:05:15.121983   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1026 02:05:15.122026   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1026 02:05:15.122077   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1026 02:05:15.206332   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1026 02:05:15.255621   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1026 02:05:15.255635   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1026 02:05:15.260271   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1026 02:05:15.260286   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1026 02:05:15.260299   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1026 02:05:15.264078   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1026 02:05:15.384471   62203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1026 02:05:15.384586   62203 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1026 02:05:15.384601   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1026 02:05:15.392191   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1026 02:05:15.392257   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1026 02:05:15.392354   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1026 02:05:15.392412   62203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1026 02:05:15.392512   62203 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1026 02:05:15.398117   62203 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1026 02:05:15.398138   62203 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1026 02:05:15.398205   62203 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1026 02:05:15.452762   62203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1026 02:05:15.452876   62203 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1026 02:05:15.482134   62203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1026 02:05:15.482244   62203 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1026 02:05:15.500408   62203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1026 02:05:15.500419   62203 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1026 02:05:15.500519   62203 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1026 02:05:15.500716   62203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1026 02:05:15.500805   62203 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1026 02:05:15.717823   62203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:05:17.428535   62203 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.030298165s)
	I1026 02:05:17.428571   62203 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1026 02:05:17.428577   62203 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.975676784s)
	I1026 02:05:17.428602   62203 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.946340666s)
	I1026 02:05:17.428604   62203 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1026 02:05:17.428619   62203 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1026 02:05:17.428622   62203 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.928086671s)
	I1026 02:05:17.428624   62203 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1026 02:05:17.428633   62203 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1026 02:05:17.428649   62203 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (1.927829133s)
	I1026 02:05:17.428660   62203 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1026 02:05:17.428683   62203 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1026 02:05:17.428693   62203 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.710841934s)
	I1026 02:05:17.428751   62203 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1026 02:05:17.428787   62203 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:05:17.428828   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:05:17.432977   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:05:18.629859   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:20.630937   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:23.129988   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:19.327760   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:19.828058   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:20.328487   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:20.827836   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:21.328618   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:21.828692   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:22.328180   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:22.827698   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:23.328474   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:23.828407   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:21.457637   62203 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.024629744s)
	I1026 02:05:21.457727   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:05:21.457645   62203 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.028931589s)
	I1026 02:05:21.457778   62203 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1026 02:05:21.457803   62203 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1026 02:05:21.457852   62203 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1026 02:05:23.356595   62203 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.898844034s)
	I1026 02:05:23.356646   62203 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.898770365s)
	I1026 02:05:23.356669   62203 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1026 02:05:23.356693   62203 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1026 02:05:23.356705   62203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:05:23.356738   62203 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1026 02:05:23.393140   62203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1026 02:05:23.393248   62203 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1026 02:05:25.630371   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:28.129379   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:24.327803   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:24.828131   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:25.328089   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:25.828080   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:26.327838   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:26.828750   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:27.328352   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:27.828164   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:28.328168   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:28.828627   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:25.325185   62203 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.968425632s)
	I1026 02:05:25.325215   62203 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1026 02:05:25.325232   62203 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1026 02:05:25.325245   62203 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.931971413s)
	I1026 02:05:25.325268   62203 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1026 02:05:25.325278   62203 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1026 02:05:27.169994   62203 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.84470622s)
	I1026 02:05:27.170020   62203 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1026 02:05:27.170045   62203 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1026 02:05:27.170094   62203 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1026 02:05:29.044171   62203 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.874054651s)
	I1026 02:05:29.044211   62203 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1026 02:05:29.044234   62203 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1026 02:05:29.044289   62203 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1026 02:05:30.130380   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:32.629487   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:29.328775   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:29.828214   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:30.328277   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:30.828549   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:31.328482   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:31.828402   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:32.327877   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:32.828764   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:33.328031   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:33.828373   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:29.903542   62203 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1026 02:05:29.903586   62203 cache_images.go:123] Successfully loaded all cached images
	I1026 02:05:29.903593   62203 cache_images.go:92] duration metric: took 15.292891204s to LoadCachedImages
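The image handling between 02:05:14 and 02:05:29 follows one pattern per image: podman image inspect to see whether it is already in the runtime's storage, then crictl rmi plus podman load -i <cached tarball> when it is not, repeated for all eight images (15.29s total here, since the no-preload profile deliberately has no preload tarball and every image comes from the per-image cache). A condensed sketch of that per-image flow, assuming the paths and image names shown in the log; simplified, not minikube's actual cache_images code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // ensureImage loads one cached image into CRI-O's storage if it is not already
    // present, mirroring the inspect -> rmi -> podman load sequence in the log.
    func ensureImage(image, cacheDir string) error {
    	// e.g. registry.k8s.io/kube-proxy:v1.31.2 -> /var/lib/minikube/images/kube-proxy_v1.31.2
    	tarball := filepath.Join(cacheDir, strings.ReplaceAll(filepath.Base(image), ":", "_"))

    	// Already present with some ID? Then there is nothing to transfer.
    	if out, err := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", image).Output(); err == nil && len(strings.TrimSpace(string(out))) > 0 {
    		return nil
    	}

    	// Remove any stale reference, then load the cached tarball.
    	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
    	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
    		return fmt.Errorf("loading %s from %s: %w", image, tarball, err)
    	}
    	fmt.Println("loaded", image, "from", tarball)
    	return nil
    }

    func main() {
    	images := []string{
    		"registry.k8s.io/kube-apiserver:v1.31.2",
    		"registry.k8s.io/etcd:3.5.15-0",
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    	}
    	for _, img := range images {
    		if err := ensureImage(img, "/var/lib/minikube/images"); err != nil {
    			fmt.Println(err)
    		}
    	}
    }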
	I1026 02:05:29.903608   62203 kubeadm.go:934] updating node { 192.168.50.9 8443 v1.31.2 crio true true} ...
	I1026 02:05:29.903717   62203 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-093148 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-093148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 02:05:29.903783   62203 ssh_runner.go:195] Run: crio config
	I1026 02:05:29.957973   62203 cni.go:84] Creating CNI manager for ""
	I1026 02:05:29.957999   62203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:05:29.958011   62203 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 02:05:29.958031   62203 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.9 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-093148 NodeName:no-preload-093148 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 02:05:29.958148   62203 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-093148"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.9"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.9"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 02:05:29.958211   62203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 02:05:29.967904   62203 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 02:05:29.967973   62203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 02:05:29.977411   62203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1026 02:05:29.995012   62203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 02:05:30.012844   62203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I1026 02:05:30.030560   62203 ssh_runner.go:195] Run: grep 192.168.50.9	control-plane.minikube.internal$ /etc/hosts
	I1026 02:05:30.034486   62203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.9	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:05:30.046885   62203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:05:30.183564   62203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:05:30.201221   62203 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148 for IP: 192.168.50.9
	I1026 02:05:30.201244   62203 certs.go:194] generating shared ca certs ...
	I1026 02:05:30.201259   62203 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:05:30.201400   62203 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 02:05:30.201482   62203 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 02:05:30.201497   62203 certs.go:256] generating profile certs ...
	I1026 02:05:30.201578   62203 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.key
	I1026 02:05:30.201673   62203 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/apiserver.key.2f3587e9
	I1026 02:05:30.201724   62203 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/proxy-client.key
	I1026 02:05:30.201875   62203 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 02:05:30.202367   62203 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 02:05:30.202395   62203 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 02:05:30.202437   62203 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 02:05:30.202471   62203 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 02:05:30.202504   62203 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 02:05:30.202571   62203 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:05:30.204872   62203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 02:05:30.232007   62203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 02:05:30.264539   62203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 02:05:30.297664   62203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 02:05:30.324293   62203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 02:05:30.348934   62203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 02:05:30.377044   62203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 02:05:30.400929   62203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 02:05:30.424981   62203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 02:05:30.448163   62203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 02:05:30.471308   62203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 02:05:30.493973   62203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 02:05:30.509882   62203 ssh_runner.go:195] Run: openssl version
	I1026 02:05:30.515150   62203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 02:05:30.525670   62203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 02:05:30.529936   62203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 02:05:30.529987   62203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 02:05:30.535534   62203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 02:05:30.545823   62203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 02:05:30.555901   62203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:05:30.559955   62203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:05:30.560017   62203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:05:30.565210   62203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 02:05:30.575995   62203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 02:05:30.586836   62203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 02:05:30.591136   62203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 02:05:30.591190   62203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 02:05:30.596577   62203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 02:05:30.606900   62203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 02:05:30.611131   62203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 02:05:30.616817   62203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 02:05:30.622616   62203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 02:05:30.628443   62203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 02:05:30.634138   62203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 02:05:30.639673   62203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
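The openssl x509 -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours before the existing cluster state is reused. Roughly the same check in Go, using one of the paths from the log; a sketch, not minikube's implementation:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // certValidFor reports whether the PEM certificate at path is still valid for
    // at least d more time, i.e. the equivalent of `openssl x509 -checkend`.
    func certValidFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err) // true means the cert outlives the next 86400 seconds
    }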
	I1026 02:05:30.645316   62203 kubeadm.go:392] StartCluster: {Name:no-preload-093148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:no-preload-093148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.9 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:05:30.645461   62203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 02:05:30.645524   62203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:05:30.681205   62203 cri.go:89] found id: ""
	I1026 02:05:30.681270   62203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 02:05:30.690779   62203 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1026 02:05:30.690806   62203 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1026 02:05:30.690859   62203 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 02:05:30.700153   62203 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 02:05:30.701093   62203 kubeconfig.go:125] found "no-preload-093148" server: "https://192.168.50.9:8443"
	I1026 02:05:30.703222   62203 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 02:05:30.712823   62203 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.9
	I1026 02:05:30.712860   62203 kubeadm.go:1160] stopping kube-system containers ...
	I1026 02:05:30.712874   62203 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1026 02:05:30.712931   62203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:05:30.749807   62203 cri.go:89] found id: ""
	I1026 02:05:30.749882   62203 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1026 02:05:30.767193   62203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:05:30.776825   62203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:05:30.776848   62203 kubeadm.go:157] found existing configuration files:
	
	I1026 02:05:30.776903   62203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 02:05:30.787697   62203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:05:30.787770   62203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:05:30.800701   62203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 02:05:30.809662   62203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:05:30.809734   62203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:05:30.818794   62203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 02:05:30.827359   62203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:05:30.827428   62203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:05:30.837718   62203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 02:05:30.846785   62203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:05:30.846839   62203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 02:05:30.856286   62203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 02:05:30.865766   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:05:30.972222   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:05:31.782882   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:05:31.992224   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:05:32.069761   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
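Because the admin/kubelet/controller-manager/scheduler kubeconfigs were missing, the restart path re-runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml instead of a full kubeadm init. A compact sketch of that sequence, with the command strings taken from the log; simplified, not minikube's restartPrimaryControlPlane:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, phase := range phases {
    		// Same shape as the log: kubeadm from the versioned binaries dir, shared config file.
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
    			phase)
    		c := exec.Command("/bin/bash", "-c", cmd)
    		c.Stdout, c.Stderr = os.Stdout, os.Stderr
    		if err := c.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
    			os.Exit(1)
    		}
    	}
    }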
	I1026 02:05:32.164166   62203 api_server.go:52] waiting for apiserver process to appear ...
	I1026 02:05:32.164256   62203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:32.665120   62203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:33.164850   62203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:33.179980   62203 api_server.go:72] duration metric: took 1.015812093s to wait for apiserver process to appear ...
	I1026 02:05:33.180011   62203 api_server.go:88] waiting for apiserver healthz status ...
	I1026 02:05:33.180042   62203 api_server.go:253] Checking apiserver healthz at https://192.168.50.9:8443/healthz ...
	I1026 02:05:36.009206   62203 api_server.go:279] https://192.168.50.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 02:05:36.009239   62203 api_server.go:103] status: https://192.168.50.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 02:05:36.009255   62203 api_server.go:253] Checking apiserver healthz at https://192.168.50.9:8443/healthz ...
	I1026 02:05:36.072137   62203 api_server.go:279] https://192.168.50.9:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 02:05:36.072161   62203 api_server.go:103] status: https://192.168.50.9:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 02:05:36.180371   62203 api_server.go:253] Checking apiserver healthz at https://192.168.50.9:8443/healthz ...
	I1026 02:05:36.201826   62203 api_server.go:279] https://192.168.50.9:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 02:05:36.201857   62203 api_server.go:103] status: https://192.168.50.9:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 02:05:36.680110   62203 api_server.go:253] Checking apiserver healthz at https://192.168.50.9:8443/healthz ...
	I1026 02:05:36.688193   62203 api_server.go:279] https://192.168.50.9:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 02:05:36.688224   62203 api_server.go:103] status: https://192.168.50.9:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 02:05:37.180878   62203 api_server.go:253] Checking apiserver healthz at https://192.168.50.9:8443/healthz ...
	I1026 02:05:37.185168   62203 api_server.go:279] https://192.168.50.9:8443/healthz returned 200:
	ok
	I1026 02:05:37.191340   62203 api_server.go:141] control plane version: v1.31.2
	I1026 02:05:37.191364   62203 api_server.go:131] duration metric: took 4.011346091s to wait for apiserver health ...
	I1026 02:05:37.191371   62203 cni.go:84] Creating CNI manager for ""
	I1026 02:05:37.191377   62203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:05:37.193306   62203 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 02:05:34.630029   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:36.630563   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:37.194654   62203 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 02:05:37.204386   62203 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1026 02:05:37.221669   62203 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 02:05:37.230792   62203 system_pods.go:59] 8 kube-system pods found
	I1026 02:05:37.230826   62203 system_pods.go:61] "coredns-7c65d6cfc9-4bxg2" [6d00ff8f-b1c5-4d37-bb5a-48874ca5fc31] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 02:05:37.230834   62203 system_pods.go:61] "etcd-no-preload-093148" [fdbc9d71-98dc-4808-abdf-19d81b1a58a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 02:05:37.230845   62203 system_pods.go:61] "kube-apiserver-no-preload-093148" [b75bc2e9-71d6-4526-ba8e-bca2755ea9e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 02:05:37.230858   62203 system_pods.go:61] "kube-controller-manager-no-preload-093148" [4e415184-b1c5-452f-886f-ce654a2d82c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 02:05:37.230871   62203 system_pods.go:61] "kube-proxy-z7nrz" [f9041b89-8769-4652-8d39-0982091ffc7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 02:05:37.230882   62203 system_pods.go:61] "kube-scheduler-no-preload-093148" [a0a403d6-29bf-48a4-aee4-50e3dc2465b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 02:05:37.230893   62203 system_pods.go:61] "metrics-server-6867b74b74-kwrk2" [25c9f457-5112-4b5b-8a28-6cb290f5ebdf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 02:05:37.230901   62203 system_pods.go:61] "storage-provisioner" [e7f5b94f-ba28-42f6-a8bf-1e7ab4248537] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 02:05:37.230907   62203 system_pods.go:74] duration metric: took 9.214512ms to wait for pod list to return data ...
	I1026 02:05:37.230916   62203 node_conditions.go:102] verifying NodePressure condition ...
	I1026 02:05:37.234087   62203 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 02:05:37.234108   62203 node_conditions.go:123] node cpu capacity is 2
	I1026 02:05:37.234118   62203 node_conditions.go:105] duration metric: took 3.198422ms to run NodePressure ...
	I1026 02:05:37.234137   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:05:37.540496   62203 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1026 02:05:37.544717   62203 kubeadm.go:739] kubelet initialised
	I1026 02:05:37.544741   62203 kubeadm.go:740] duration metric: took 4.218456ms waiting for restarted kubelet to initialise ...
	I1026 02:05:37.544749   62203 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:05:37.549223   62203 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-4bxg2" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:37.553525   62203 pod_ready.go:98] node "no-preload-093148" hosting pod "coredns-7c65d6cfc9-4bxg2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:37.553556   62203 pod_ready.go:82] duration metric: took 4.30875ms for pod "coredns-7c65d6cfc9-4bxg2" in "kube-system" namespace to be "Ready" ...
	E1026 02:05:37.553565   62203 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-093148" hosting pod "coredns-7c65d6cfc9-4bxg2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:37.553578   62203 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-093148" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:37.557271   62203 pod_ready.go:98] node "no-preload-093148" hosting pod "etcd-no-preload-093148" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:37.557292   62203 pod_ready.go:82] duration metric: took 3.706013ms for pod "etcd-no-preload-093148" in "kube-system" namespace to be "Ready" ...
	E1026 02:05:37.557300   62203 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-093148" hosting pod "etcd-no-preload-093148" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:37.557306   62203 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-093148" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:37.561924   62203 pod_ready.go:98] node "no-preload-093148" hosting pod "kube-apiserver-no-preload-093148" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:37.561946   62203 pod_ready.go:82] duration metric: took 4.633393ms for pod "kube-apiserver-no-preload-093148" in "kube-system" namespace to be "Ready" ...
	E1026 02:05:37.561955   62203 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-093148" hosting pod "kube-apiserver-no-preload-093148" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:37.561961   62203 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-093148" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:37.625651   62203 pod_ready.go:98] node "no-preload-093148" hosting pod "kube-controller-manager-no-preload-093148" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:37.625679   62203 pod_ready.go:82] duration metric: took 63.708955ms for pod "kube-controller-manager-no-preload-093148" in "kube-system" namespace to be "Ready" ...
	E1026 02:05:37.625691   62203 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-093148" hosting pod "kube-controller-manager-no-preload-093148" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:37.625700   62203 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-z7nrz" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:38.025554   62203 pod_ready.go:98] node "no-preload-093148" hosting pod "kube-proxy-z7nrz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:38.025581   62203 pod_ready.go:82] duration metric: took 399.872004ms for pod "kube-proxy-z7nrz" in "kube-system" namespace to be "Ready" ...
	E1026 02:05:38.025594   62203 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-093148" hosting pod "kube-proxy-z7nrz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:38.025603   62203 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-093148" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:38.425546   62203 pod_ready.go:98] node "no-preload-093148" hosting pod "kube-scheduler-no-preload-093148" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:38.425576   62203 pod_ready.go:82] duration metric: took 399.96578ms for pod "kube-scheduler-no-preload-093148" in "kube-system" namespace to be "Ready" ...
	E1026 02:05:38.425587   62203 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-093148" hosting pod "kube-scheduler-no-preload-093148" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:38.425597   62203 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:38.826227   62203 pod_ready.go:98] node "no-preload-093148" hosting pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:38.826257   62203 pod_ready.go:82] duration metric: took 400.648983ms for pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace to be "Ready" ...
	E1026 02:05:38.826269   62203 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-093148" hosting pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:38.826280   62203 pod_ready.go:39] duration metric: took 1.281521874s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:05:38.826304   62203 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 02:05:38.838142   62203 ops.go:34] apiserver oom_adj: -16
	I1026 02:05:38.838178   62203 kubeadm.go:597] duration metric: took 8.147358859s to restartPrimaryControlPlane
	I1026 02:05:38.838190   62203 kubeadm.go:394] duration metric: took 8.192881001s to StartCluster
	I1026 02:05:38.838210   62203 settings.go:142] acquiring lock: {Name:mkb363a7a1b1532a7f832b54a0283d0a9e3d2b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:05:38.838281   62203 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:05:38.840172   62203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:05:38.840459   62203 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.9 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 02:05:38.840643   62203 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 02:05:38.840750   62203 addons.go:69] Setting storage-provisioner=true in profile "no-preload-093148"
	I1026 02:05:38.840768   62203 addons.go:234] Setting addon storage-provisioner=true in "no-preload-093148"
	W1026 02:05:38.840775   62203 addons.go:243] addon storage-provisioner should already be in state true
	I1026 02:05:38.840804   62203 host.go:66] Checking if "no-preload-093148" exists ...
	I1026 02:05:38.840825   62203 config.go:182] Loaded profile config "no-preload-093148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:05:38.840877   62203 addons.go:69] Setting default-storageclass=true in profile "no-preload-093148"
	I1026 02:05:38.840889   62203 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-093148"
	I1026 02:05:38.841225   62203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:05:38.841258   62203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:05:38.841258   62203 addons.go:69] Setting metrics-server=true in profile "no-preload-093148"
	I1026 02:05:38.841270   62203 addons.go:234] Setting addon metrics-server=true in "no-preload-093148"
	W1026 02:05:38.841277   62203 addons.go:243] addon metrics-server should already be in state true
	I1026 02:05:38.841301   62203 host.go:66] Checking if "no-preload-093148" exists ...
	I1026 02:05:38.841653   62203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:05:38.841669   62203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:05:38.841699   62203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:05:38.841708   62203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:05:38.842474   62203 out.go:177] * Verifying Kubernetes components...
	I1026 02:05:38.845801   62203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:05:38.858920   62203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44823
	I1026 02:05:38.859318   62203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41889
	I1026 02:05:38.859452   62203 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:05:38.859872   62203 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:05:38.860081   62203 main.go:141] libmachine: Using API Version  1
	I1026 02:05:38.860099   62203 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:05:38.860392   62203 main.go:141] libmachine: Using API Version  1
	I1026 02:05:38.860416   62203 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:05:38.860432   62203 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:05:38.860658   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetState
	I1026 02:05:38.860733   62203 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:05:38.861295   62203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:05:38.861341   62203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:05:38.862265   62203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38715
	I1026 02:05:38.862738   62203 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:05:38.863210   62203 main.go:141] libmachine: Using API Version  1
	I1026 02:05:38.863229   62203 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:05:38.864024   62203 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:05:38.864456   62203 addons.go:234] Setting addon default-storageclass=true in "no-preload-093148"
	W1026 02:05:38.864474   62203 addons.go:243] addon default-storageclass should already be in state true
	I1026 02:05:38.864504   62203 host.go:66] Checking if "no-preload-093148" exists ...
	I1026 02:05:38.864602   62203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:05:38.864647   62203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:05:38.864794   62203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:05:38.864829   62203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:05:38.883187   62203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39483
	I1026 02:05:38.885953   62203 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:05:38.886464   62203 main.go:141] libmachine: Using API Version  1
	I1026 02:05:38.886493   62203 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:05:38.886786   62203 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:05:38.886974   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetState
	I1026 02:05:38.888637   62203 main.go:141] libmachine: (no-preload-093148) Calling .DriverName
	I1026 02:05:38.890492   62203 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:05:38.891861   62203 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:05:38.891880   62203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 02:05:38.891899   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHHostname
	I1026 02:05:38.895186   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:38.895625   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:38.895651   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:38.895898   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHPort
	I1026 02:05:38.896128   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:38.896264   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHUsername
	I1026 02:05:38.896433   62203 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/no-preload-093148/id_rsa Username:docker}
	I1026 02:05:38.905526   62203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41705
	I1026 02:05:38.906198   62203 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:05:38.906559   62203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35207
	I1026 02:05:38.907109   62203 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:05:38.907178   62203 main.go:141] libmachine: Using API Version  1
	I1026 02:05:38.907201   62203 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:05:38.907552   62203 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:05:38.907659   62203 main.go:141] libmachine: Using API Version  1
	I1026 02:05:38.907680   62203 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:05:38.908146   62203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:05:38.908196   62203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:05:38.908349   62203 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:05:38.908553   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetState
	I1026 02:05:38.910877   62203 main.go:141] libmachine: (no-preload-093148) Calling .DriverName
	I1026 02:05:38.912460   62203 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1026 02:05:34.328417   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:34.827883   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:35.328611   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:35.828369   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:36.328158   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:36.828404   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:37.327714   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:37.828183   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:38.328432   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:38.828619   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:38.913530   62203 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 02:05:38.913544   62203 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 02:05:38.913559   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHHostname
	I1026 02:05:38.916376   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:38.916737   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:38.916755   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:38.916926   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHPort
	I1026 02:05:38.917071   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:38.917175   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHUsername
	I1026 02:05:38.917270   62203 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/no-preload-093148/id_rsa Username:docker}
	I1026 02:05:38.932245   62203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33373
	I1026 02:05:38.932895   62203 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:05:38.933467   62203 main.go:141] libmachine: Using API Version  1
	I1026 02:05:38.933491   62203 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:05:38.933903   62203 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:05:38.934074   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetState
	I1026 02:05:38.935632   62203 main.go:141] libmachine: (no-preload-093148) Calling .DriverName
	I1026 02:05:38.935864   62203 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 02:05:38.935885   62203 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 02:05:38.935904   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHHostname
	I1026 02:05:38.938976   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:38.939324   62203 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 03:05:04 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 02:05:38.939349   62203 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 02:05:38.939595   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHPort
	I1026 02:05:38.939754   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 02:05:38.939901   62203 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHUsername
	I1026 02:05:38.940028   62203 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/no-preload-093148/id_rsa Username:docker}
	I1026 02:05:39.051633   62203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:05:39.069765   62203 node_ready.go:35] waiting up to 6m0s for node "no-preload-093148" to be "Ready" ...
	I1026 02:05:39.133230   62203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 02:05:39.217558   62203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:05:39.239072   62203 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 02:05:39.239097   62203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1026 02:05:39.261019   62203 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 02:05:39.261045   62203 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 02:05:39.324127   62203 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 02:05:39.324158   62203 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 02:05:39.395523   62203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 02:05:39.575767   62203 main.go:141] libmachine: Making call to close driver server
	I1026 02:05:39.575798   62203 main.go:141] libmachine: (no-preload-093148) Calling .Close
	I1026 02:05:39.576158   62203 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:05:39.576176   62203 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:05:39.576186   62203 main.go:141] libmachine: Making call to close driver server
	I1026 02:05:39.576189   62203 main.go:141] libmachine: (no-preload-093148) DBG | Closing plugin on server side
	I1026 02:05:39.576195   62203 main.go:141] libmachine: (no-preload-093148) Calling .Close
	I1026 02:05:39.576452   62203 main.go:141] libmachine: (no-preload-093148) DBG | Closing plugin on server side
	I1026 02:05:39.576462   62203 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:05:39.576475   62203 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:05:39.586596   62203 main.go:141] libmachine: Making call to close driver server
	I1026 02:05:39.586614   62203 main.go:141] libmachine: (no-preload-093148) Calling .Close
	I1026 02:05:39.586861   62203 main.go:141] libmachine: (no-preload-093148) DBG | Closing plugin on server side
	I1026 02:05:39.586886   62203 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:05:39.586896   62203 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:05:40.404461   62203 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18686885s)
	I1026 02:05:40.404510   62203 main.go:141] libmachine: Making call to close driver server
	I1026 02:05:40.404522   62203 main.go:141] libmachine: (no-preload-093148) Calling .Close
	I1026 02:05:40.404601   62203 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.009042858s)
	I1026 02:05:40.404642   62203 main.go:141] libmachine: Making call to close driver server
	I1026 02:05:40.404655   62203 main.go:141] libmachine: (no-preload-093148) Calling .Close
	I1026 02:05:40.404804   62203 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:05:40.404820   62203 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:05:40.404832   62203 main.go:141] libmachine: Making call to close driver server
	I1026 02:05:40.404840   62203 main.go:141] libmachine: (no-preload-093148) Calling .Close
	I1026 02:05:40.404963   62203 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:05:40.404976   62203 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:05:40.404997   62203 main.go:141] libmachine: Making call to close driver server
	I1026 02:05:40.405006   62203 main.go:141] libmachine: (no-preload-093148) Calling .Close
	I1026 02:05:40.405163   62203 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:05:40.405180   62203 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:05:40.405194   62203 main.go:141] libmachine: (no-preload-093148) DBG | Closing plugin on server side
	I1026 02:05:40.405215   62203 main.go:141] libmachine: (no-preload-093148) DBG | Closing plugin on server side
	I1026 02:05:40.405227   62203 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:05:40.405233   62203 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:05:40.405241   62203 addons.go:475] Verifying addon metrics-server=true in "no-preload-093148"
	I1026 02:05:40.407426   62203 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1026 02:05:39.130696   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:41.628794   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:39.328464   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:39.828733   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:40.328692   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:40.827978   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:41.328589   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:41.828084   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:42.327947   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:42.827814   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:43.328619   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:43.827779   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:40.408679   62203 addons.go:510] duration metric: took 1.568044336s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1026 02:05:41.072941   62203 node_ready.go:53] node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:43.074415   62203 node_ready.go:53] node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:43.629001   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:46.128841   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:48.129705   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:44.328770   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:44.828429   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:45.328402   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:45.828561   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:46.328733   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:46.828478   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:47.328066   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:47.828102   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:48.327971   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:48.828607   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:45.573975   62203 node_ready.go:53] node "no-preload-093148" has status "Ready":"False"
	I1026 02:05:47.077937   62203 node_ready.go:49] node "no-preload-093148" has status "Ready":"True"
	I1026 02:05:47.077962   62203 node_ready.go:38] duration metric: took 8.008160521s for node "no-preload-093148" to be "Ready" ...
	I1026 02:05:47.077974   62203 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:05:47.087707   62203 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4bxg2" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:47.099043   62203 pod_ready.go:93] pod "coredns-7c65d6cfc9-4bxg2" in "kube-system" namespace has status "Ready":"True"
	I1026 02:05:47.099069   62203 pod_ready.go:82] duration metric: took 11.32965ms for pod "coredns-7c65d6cfc9-4bxg2" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:47.099081   62203 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-093148" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:47.106205   62203 pod_ready.go:93] pod "etcd-no-preload-093148" in "kube-system" namespace has status "Ready":"True"
	I1026 02:05:47.106229   62203 pod_ready.go:82] duration metric: took 7.140136ms for pod "etcd-no-preload-093148" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:47.106237   62203 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-093148" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:47.612072   62203 pod_ready.go:93] pod "kube-apiserver-no-preload-093148" in "kube-system" namespace has status "Ready":"True"
	I1026 02:05:47.612092   62203 pod_ready.go:82] duration metric: took 505.849617ms for pod "kube-apiserver-no-preload-093148" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:47.612102   62203 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-093148" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:47.616260   62203 pod_ready.go:93] pod "kube-controller-manager-no-preload-093148" in "kube-system" namespace has status "Ready":"True"
	I1026 02:05:47.616277   62203 pod_ready.go:82] duration metric: took 4.169951ms for pod "kube-controller-manager-no-preload-093148" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:47.616286   62203 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z7nrz" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:47.874009   62203 pod_ready.go:93] pod "kube-proxy-z7nrz" in "kube-system" namespace has status "Ready":"True"
	I1026 02:05:47.874031   62203 pod_ready.go:82] duration metric: took 257.739432ms for pod "kube-proxy-z7nrz" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:47.874040   62203 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-093148" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:48.274284   62203 pod_ready.go:93] pod "kube-scheduler-no-preload-093148" in "kube-system" namespace has status "Ready":"True"
	I1026 02:05:48.274312   62203 pod_ready.go:82] duration metric: took 400.264678ms for pod "kube-scheduler-no-preload-093148" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:48.274330   62203 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace to be "Ready" ...
	I1026 02:05:50.628633   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:52.628798   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:49.328568   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:49.827742   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:50.328650   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:50.828376   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:51.328489   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:51.827803   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:52.328543   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:52.828194   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:53.327741   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:53.828510   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:50.279993   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:52.779736   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:54.629107   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:56.629594   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:54.328518   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:54.828001   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:55.328146   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:55.828717   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:56.327938   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:56.828723   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:57.328164   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:57.827948   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:58.328295   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:58.828771   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:54.780821   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:57.281033   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:59.281647   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:58.630038   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:01.130007   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:03.130826   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:05:59.328113   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:59.828023   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:00.327856   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:00.828227   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:01.328318   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:01.828377   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:02.328413   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:02.828408   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:02.828482   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:02.865253   62745 cri.go:89] found id: ""
	I1026 02:06:02.865282   62745 logs.go:282] 0 containers: []
	W1026 02:06:02.865292   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:02.865301   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:02.865365   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:02.897413   62745 cri.go:89] found id: ""
	I1026 02:06:02.897455   62745 logs.go:282] 0 containers: []
	W1026 02:06:02.897466   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:02.897473   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:02.897537   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:02.934081   62745 cri.go:89] found id: ""
	I1026 02:06:02.934104   62745 logs.go:282] 0 containers: []
	W1026 02:06:02.934111   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:02.934117   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:02.934168   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:02.965275   62745 cri.go:89] found id: ""
	I1026 02:06:02.965305   62745 logs.go:282] 0 containers: []
	W1026 02:06:02.965316   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:02.965325   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:02.965391   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:02.997817   62745 cri.go:89] found id: ""
	I1026 02:06:02.997847   62745 logs.go:282] 0 containers: []
	W1026 02:06:02.997854   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:02.997861   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:02.997930   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:03.029105   62745 cri.go:89] found id: ""
	I1026 02:06:03.029137   62745 logs.go:282] 0 containers: []
	W1026 02:06:03.029148   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:03.029156   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:03.029214   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:03.061064   62745 cri.go:89] found id: ""
	I1026 02:06:03.061092   62745 logs.go:282] 0 containers: []
	W1026 02:06:03.061103   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:03.061114   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:03.061177   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:03.095111   62745 cri.go:89] found id: ""
	I1026 02:06:03.095154   62745 logs.go:282] 0 containers: []
	W1026 02:06:03.095164   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:03.095184   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:03.095201   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:03.148013   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:03.148044   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:03.160911   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:03.160948   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:03.282690   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:03.282709   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:03.282720   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:03.356710   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:03.356753   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:01.780070   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:03.781126   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:05.629526   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:07.630011   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:05.894053   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:05.906753   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:05.906825   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:05.939843   62745 cri.go:89] found id: ""
	I1026 02:06:05.939893   62745 logs.go:282] 0 containers: []
	W1026 02:06:05.939901   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:05.939914   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:05.939962   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:05.971681   62745 cri.go:89] found id: ""
	I1026 02:06:05.971711   62745 logs.go:282] 0 containers: []
	W1026 02:06:05.971724   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:05.971730   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:05.971777   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:06.023889   62745 cri.go:89] found id: ""
	I1026 02:06:06.023923   62745 logs.go:282] 0 containers: []
	W1026 02:06:06.023934   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:06.023943   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:06.023992   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:06.060326   62745 cri.go:89] found id: ""
	I1026 02:06:06.060356   62745 logs.go:282] 0 containers: []
	W1026 02:06:06.060368   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:06.060375   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:06.060437   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:06.093213   62745 cri.go:89] found id: ""
	I1026 02:06:06.093243   62745 logs.go:282] 0 containers: []
	W1026 02:06:06.093259   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:06.093267   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:06.093331   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:06.125005   62745 cri.go:89] found id: ""
	I1026 02:06:06.125032   62745 logs.go:282] 0 containers: []
	W1026 02:06:06.125042   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:06.125049   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:06.125110   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:06.165744   62745 cri.go:89] found id: ""
	I1026 02:06:06.165771   62745 logs.go:282] 0 containers: []
	W1026 02:06:06.165786   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:06.165795   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:06.165858   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:06.198223   62745 cri.go:89] found id: ""
	I1026 02:06:06.198249   62745 logs.go:282] 0 containers: []
	W1026 02:06:06.198258   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:06.198265   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:06.198275   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:06.247162   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:06.247193   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:06.259963   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:06.259986   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:06.329743   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:06.329770   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:06.329787   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:06.402917   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:06.402953   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:08.941593   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:08.954121   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:08.954182   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:08.986088   62745 cri.go:89] found id: ""
	I1026 02:06:08.986115   62745 logs.go:282] 0 containers: []
	W1026 02:06:08.986126   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:08.986133   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:08.986192   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:09.017861   62745 cri.go:89] found id: ""
	I1026 02:06:09.017888   62745 logs.go:282] 0 containers: []
	W1026 02:06:09.017896   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:09.017901   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:09.017948   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:09.050015   62745 cri.go:89] found id: ""
	I1026 02:06:09.050038   62745 logs.go:282] 0 containers: []
	W1026 02:06:09.050046   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:09.050051   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:09.050096   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:09.081336   62745 cri.go:89] found id: ""
	I1026 02:06:09.081359   62745 logs.go:282] 0 containers: []
	W1026 02:06:09.081366   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:09.081371   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:09.081446   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:09.113330   62745 cri.go:89] found id: ""
	I1026 02:06:09.113364   62745 logs.go:282] 0 containers: []
	W1026 02:06:09.113376   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:09.113384   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:09.113468   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:09.146319   62745 cri.go:89] found id: ""
	I1026 02:06:09.146347   62745 logs.go:282] 0 containers: []
	W1026 02:06:09.146358   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:09.146366   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:09.146425   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:06.281623   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:08.780278   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:10.129194   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:12.130065   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:09.177827   62745 cri.go:89] found id: ""
	I1026 02:06:09.177854   62745 logs.go:282] 0 containers: []
	W1026 02:06:09.177866   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:09.177874   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:09.177933   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:09.211351   62745 cri.go:89] found id: ""
	I1026 02:06:09.211389   62745 logs.go:282] 0 containers: []
	W1026 02:06:09.211400   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:09.211411   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:09.211425   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:09.283433   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:09.283459   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:09.283474   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:09.361349   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:09.361383   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:09.397461   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:09.397490   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:09.447443   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:09.447474   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:11.961583   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:11.975577   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:11.975638   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:12.011335   62745 cri.go:89] found id: ""
	I1026 02:06:12.011363   62745 logs.go:282] 0 containers: []
	W1026 02:06:12.011372   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:12.011377   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:12.011432   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:12.048024   62745 cri.go:89] found id: ""
	I1026 02:06:12.048048   62745 logs.go:282] 0 containers: []
	W1026 02:06:12.048056   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:12.048062   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:12.048113   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:12.080372   62745 cri.go:89] found id: ""
	I1026 02:06:12.080394   62745 logs.go:282] 0 containers: []
	W1026 02:06:12.080401   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:12.080407   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:12.080456   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:12.112306   62745 cri.go:89] found id: ""
	I1026 02:06:12.112341   62745 logs.go:282] 0 containers: []
	W1026 02:06:12.112352   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:12.112360   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:12.112424   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:12.146551   62745 cri.go:89] found id: ""
	I1026 02:06:12.146578   62745 logs.go:282] 0 containers: []
	W1026 02:06:12.146588   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:12.146595   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:12.146652   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:12.178248   62745 cri.go:89] found id: ""
	I1026 02:06:12.178277   62745 logs.go:282] 0 containers: []
	W1026 02:06:12.178286   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:12.178291   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:12.178348   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:12.210980   62745 cri.go:89] found id: ""
	I1026 02:06:12.211003   62745 logs.go:282] 0 containers: []
	W1026 02:06:12.211010   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:12.211016   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:12.211067   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:12.244863   62745 cri.go:89] found id: ""
	I1026 02:06:12.244890   62745 logs.go:282] 0 containers: []
	W1026 02:06:12.244901   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:12.244910   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:12.244929   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:12.257397   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:12.257434   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:12.326641   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:12.326670   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:12.326682   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:12.400300   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:12.400343   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:12.456354   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:12.456389   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:10.781114   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:13.280099   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:14.629819   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:16.629912   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:15.017291   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:15.031144   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:15.031217   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:15.064159   62745 cri.go:89] found id: ""
	I1026 02:06:15.064189   62745 logs.go:282] 0 containers: []
	W1026 02:06:15.064199   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:15.064206   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:15.064268   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:15.096879   62745 cri.go:89] found id: ""
	I1026 02:06:15.096910   62745 logs.go:282] 0 containers: []
	W1026 02:06:15.096917   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:15.096924   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:15.096986   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:15.131602   62745 cri.go:89] found id: ""
	I1026 02:06:15.131623   62745 logs.go:282] 0 containers: []
	W1026 02:06:15.131630   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:15.131636   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:15.131695   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:15.165190   62745 cri.go:89] found id: ""
	I1026 02:06:15.165216   62745 logs.go:282] 0 containers: []
	W1026 02:06:15.165224   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:15.165230   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:15.165289   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:15.197064   62745 cri.go:89] found id: ""
	I1026 02:06:15.197092   62745 logs.go:282] 0 containers: []
	W1026 02:06:15.197100   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:15.197106   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:15.197153   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:15.233806   62745 cri.go:89] found id: ""
	I1026 02:06:15.233836   62745 logs.go:282] 0 containers: []
	W1026 02:06:15.233845   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:15.233852   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:15.233911   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:15.270313   62745 cri.go:89] found id: ""
	I1026 02:06:15.270338   62745 logs.go:282] 0 containers: []
	W1026 02:06:15.270347   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:15.270355   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:15.270414   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:15.303312   62745 cri.go:89] found id: ""
	I1026 02:06:15.303341   62745 logs.go:282] 0 containers: []
	W1026 02:06:15.303351   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:15.303361   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:15.303374   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:15.355400   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:15.355434   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:15.368325   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:15.368356   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:15.444522   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:15.444548   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:15.444560   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:15.522243   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:15.522278   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:18.064129   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:18.076361   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:18.076440   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:18.107859   62745 cri.go:89] found id: ""
	I1026 02:06:18.107894   62745 logs.go:282] 0 containers: []
	W1026 02:06:18.107905   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:18.107914   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:18.107979   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:18.142326   62745 cri.go:89] found id: ""
	I1026 02:06:18.142353   62745 logs.go:282] 0 containers: []
	W1026 02:06:18.142362   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:18.142370   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:18.142433   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:18.182660   62745 cri.go:89] found id: ""
	I1026 02:06:18.182700   62745 logs.go:282] 0 containers: []
	W1026 02:06:18.182710   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:18.182717   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:18.182783   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:18.225675   62745 cri.go:89] found id: ""
	I1026 02:06:18.225702   62745 logs.go:282] 0 containers: []
	W1026 02:06:18.225713   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:18.225721   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:18.225782   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:18.280184   62745 cri.go:89] found id: ""
	I1026 02:06:18.280218   62745 logs.go:282] 0 containers: []
	W1026 02:06:18.280228   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:18.280235   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:18.280297   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:18.314769   62745 cri.go:89] found id: ""
	I1026 02:06:18.314793   62745 logs.go:282] 0 containers: []
	W1026 02:06:18.314803   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:18.314811   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:18.314875   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:18.349686   62745 cri.go:89] found id: ""
	I1026 02:06:18.349712   62745 logs.go:282] 0 containers: []
	W1026 02:06:18.349723   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:18.349731   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:18.349791   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:18.384890   62745 cri.go:89] found id: ""
	I1026 02:06:18.384914   62745 logs.go:282] 0 containers: []
	W1026 02:06:18.384922   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:18.384931   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:18.384951   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:18.436690   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:18.436724   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:18.450449   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:18.450484   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:18.517832   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:18.517858   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:18.517872   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:18.593629   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:18.593671   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:15.281540   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:17.781157   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:18.630340   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:21.129394   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:21.132614   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:21.144963   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:21.145024   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:21.178673   62745 cri.go:89] found id: ""
	I1026 02:06:21.178698   62745 logs.go:282] 0 containers: []
	W1026 02:06:21.178712   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:21.178718   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:21.178766   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:21.209604   62745 cri.go:89] found id: ""
	I1026 02:06:21.209625   62745 logs.go:282] 0 containers: []
	W1026 02:06:21.209633   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:21.209638   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:21.209685   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:21.245359   62745 cri.go:89] found id: ""
	I1026 02:06:21.245387   62745 logs.go:282] 0 containers: []
	W1026 02:06:21.245395   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:21.245401   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:21.245478   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:21.280522   62745 cri.go:89] found id: ""
	I1026 02:06:21.280549   62745 logs.go:282] 0 containers: []
	W1026 02:06:21.280560   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:21.280568   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:21.280632   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:21.311215   62745 cri.go:89] found id: ""
	I1026 02:06:21.311258   62745 logs.go:282] 0 containers: []
	W1026 02:06:21.311269   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:21.311277   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:21.311345   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:21.344383   62745 cri.go:89] found id: ""
	I1026 02:06:21.344408   62745 logs.go:282] 0 containers: []
	W1026 02:06:21.344417   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:21.344423   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:21.344470   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:21.375505   62745 cri.go:89] found id: ""
	I1026 02:06:21.375529   62745 logs.go:282] 0 containers: []
	W1026 02:06:21.375537   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:21.375543   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:21.375594   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:21.408845   62745 cri.go:89] found id: ""
	I1026 02:06:21.408872   62745 logs.go:282] 0 containers: []
	W1026 02:06:21.408882   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:21.408893   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:21.408907   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:21.460091   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:21.460132   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:21.472960   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:21.472988   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:21.545280   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:21.545307   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:21.545321   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:21.625622   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:21.625660   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:24.163695   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:24.175697   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:24.175768   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:20.280184   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:22.281612   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:23.629867   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:26.128868   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:28.129197   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:24.207555   62745 cri.go:89] found id: ""
	I1026 02:06:24.207580   62745 logs.go:282] 0 containers: []
	W1026 02:06:24.207590   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:24.207597   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:24.207659   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:24.238550   62745 cri.go:89] found id: ""
	I1026 02:06:24.238577   62745 logs.go:282] 0 containers: []
	W1026 02:06:24.238585   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:24.238593   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:24.238657   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:24.270725   62745 cri.go:89] found id: ""
	I1026 02:06:24.270756   62745 logs.go:282] 0 containers: []
	W1026 02:06:24.270767   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:24.270780   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:24.270840   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:24.304565   62745 cri.go:89] found id: ""
	I1026 02:06:24.304587   62745 logs.go:282] 0 containers: []
	W1026 02:06:24.304595   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:24.304601   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:24.304654   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:24.337792   62745 cri.go:89] found id: ""
	I1026 02:06:24.337820   62745 logs.go:282] 0 containers: []
	W1026 02:06:24.337831   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:24.337840   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:24.337902   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:24.372965   62745 cri.go:89] found id: ""
	I1026 02:06:24.372993   62745 logs.go:282] 0 containers: []
	W1026 02:06:24.373003   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:24.373011   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:24.373071   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:24.404874   62745 cri.go:89] found id: ""
	I1026 02:06:24.404902   62745 logs.go:282] 0 containers: []
	W1026 02:06:24.404910   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:24.404915   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:24.404965   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:24.438182   62745 cri.go:89] found id: ""
	I1026 02:06:24.438206   62745 logs.go:282] 0 containers: []
	W1026 02:06:24.438216   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:24.438227   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:24.438241   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:24.487859   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:24.487904   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:24.500443   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:24.500468   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:24.565149   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:24.565173   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:24.565185   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:24.644448   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:24.644483   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:27.190134   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:27.202811   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:27.202866   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:27.234433   62745 cri.go:89] found id: ""
	I1026 02:06:27.234458   62745 logs.go:282] 0 containers: []
	W1026 02:06:27.234469   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:27.234476   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:27.234536   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:27.270714   62745 cri.go:89] found id: ""
	I1026 02:06:27.270736   62745 logs.go:282] 0 containers: []
	W1026 02:06:27.270743   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:27.270750   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:27.270796   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:27.303782   62745 cri.go:89] found id: ""
	I1026 02:06:27.303808   62745 logs.go:282] 0 containers: []
	W1026 02:06:27.303819   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:27.303824   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:27.303873   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:27.333589   62745 cri.go:89] found id: ""
	I1026 02:06:27.333618   62745 logs.go:282] 0 containers: []
	W1026 02:06:27.333629   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:27.333637   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:27.333695   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:27.364461   62745 cri.go:89] found id: ""
	I1026 02:06:27.364490   62745 logs.go:282] 0 containers: []
	W1026 02:06:27.364499   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:27.364506   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:27.364570   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:27.397191   62745 cri.go:89] found id: ""
	I1026 02:06:27.397214   62745 logs.go:282] 0 containers: []
	W1026 02:06:27.397222   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:27.397228   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:27.397288   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:27.427780   62745 cri.go:89] found id: ""
	I1026 02:06:27.427809   62745 logs.go:282] 0 containers: []
	W1026 02:06:27.427819   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:27.427827   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:27.427887   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:27.460702   62745 cri.go:89] found id: ""
	I1026 02:06:27.460728   62745 logs.go:282] 0 containers: []
	W1026 02:06:27.460736   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:27.460745   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:27.460756   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:27.506782   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:27.506815   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:27.519441   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:27.519480   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:27.580627   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:27.580649   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:27.580661   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:27.657114   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:27.657147   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:24.781076   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:27.280414   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:30.129347   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:32.637356   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:30.196989   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:30.210008   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:30.210071   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:30.243027   62745 cri.go:89] found id: ""
	I1026 02:06:30.243055   62745 logs.go:282] 0 containers: []
	W1026 02:06:30.243064   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:30.243073   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:30.243133   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:30.274236   62745 cri.go:89] found id: ""
	I1026 02:06:30.274269   62745 logs.go:282] 0 containers: []
	W1026 02:06:30.274286   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:30.274294   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:30.274354   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:30.307917   62745 cri.go:89] found id: ""
	I1026 02:06:30.307957   62745 logs.go:282] 0 containers: []
	W1026 02:06:30.307968   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:30.307976   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:30.308034   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:30.343579   62745 cri.go:89] found id: ""
	I1026 02:06:30.343611   62745 logs.go:282] 0 containers: []
	W1026 02:06:30.343623   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:30.343631   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:30.343691   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:30.375164   62745 cri.go:89] found id: ""
	I1026 02:06:30.375186   62745 logs.go:282] 0 containers: []
	W1026 02:06:30.375193   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:30.375199   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:30.375254   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:30.408895   62745 cri.go:89] found id: ""
	I1026 02:06:30.408920   62745 logs.go:282] 0 containers: []
	W1026 02:06:30.408930   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:30.408938   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:30.409001   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:30.439274   62745 cri.go:89] found id: ""
	I1026 02:06:30.439296   62745 logs.go:282] 0 containers: []
	W1026 02:06:30.439304   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:30.439310   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:30.439370   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:30.471091   62745 cri.go:89] found id: ""
	I1026 02:06:30.471118   62745 logs.go:282] 0 containers: []
	W1026 02:06:30.471130   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:30.471141   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:30.471154   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:30.547117   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:30.547157   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:30.586923   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:30.586956   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:30.636445   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:30.636472   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:30.649546   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:30.649571   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:30.718659   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:33.219071   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:33.232931   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:33.233002   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:33.264587   62745 cri.go:89] found id: ""
	I1026 02:06:33.264621   62745 logs.go:282] 0 containers: []
	W1026 02:06:33.264633   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:33.264642   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:33.264699   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:33.298613   62745 cri.go:89] found id: ""
	I1026 02:06:33.298640   62745 logs.go:282] 0 containers: []
	W1026 02:06:33.298650   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:33.298658   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:33.298724   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:33.330811   62745 cri.go:89] found id: ""
	I1026 02:06:33.330835   62745 logs.go:282] 0 containers: []
	W1026 02:06:33.330842   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:33.330849   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:33.330896   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:33.361120   62745 cri.go:89] found id: ""
	I1026 02:06:33.361148   62745 logs.go:282] 0 containers: []
	W1026 02:06:33.361158   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:33.361166   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:33.361224   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:33.392734   62745 cri.go:89] found id: ""
	I1026 02:06:33.392763   62745 logs.go:282] 0 containers: []
	W1026 02:06:33.392772   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:33.392778   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:33.392836   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:33.429516   62745 cri.go:89] found id: ""
	I1026 02:06:33.429541   62745 logs.go:282] 0 containers: []
	W1026 02:06:33.429549   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:33.429557   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:33.429608   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:33.465411   62745 cri.go:89] found id: ""
	I1026 02:06:33.465462   62745 logs.go:282] 0 containers: []
	W1026 02:06:33.465472   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:33.465478   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:33.465526   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:33.502158   62745 cri.go:89] found id: ""
	I1026 02:06:33.502181   62745 logs.go:282] 0 containers: []
	W1026 02:06:33.502189   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:33.502197   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:33.502209   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:33.516171   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:33.516200   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:33.581371   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:33.581397   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:33.581409   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:33.660245   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:33.660276   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:33.695652   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:33.695680   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:29.780415   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:31.781001   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:34.280777   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:35.129017   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:37.129305   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:36.246566   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:36.258931   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:36.259002   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:36.290554   62745 cri.go:89] found id: ""
	I1026 02:06:36.290583   62745 logs.go:282] 0 containers: []
	W1026 02:06:36.290594   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:36.290602   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:36.290664   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:36.322351   62745 cri.go:89] found id: ""
	I1026 02:06:36.322380   62745 logs.go:282] 0 containers: []
	W1026 02:06:36.322391   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:36.322400   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:36.322454   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:36.353248   62745 cri.go:89] found id: ""
	I1026 02:06:36.353279   62745 logs.go:282] 0 containers: []
	W1026 02:06:36.353289   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:36.353296   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:36.353352   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:36.386647   62745 cri.go:89] found id: ""
	I1026 02:06:36.386679   62745 logs.go:282] 0 containers: []
	W1026 02:06:36.386687   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:36.386693   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:36.386753   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:36.418688   62745 cri.go:89] found id: ""
	I1026 02:06:36.418714   62745 logs.go:282] 0 containers: []
	W1026 02:06:36.418729   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:36.418738   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:36.418796   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:36.453641   62745 cri.go:89] found id: ""
	I1026 02:06:36.453665   62745 logs.go:282] 0 containers: []
	W1026 02:06:36.453673   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:36.453681   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:36.453736   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:36.486122   62745 cri.go:89] found id: ""
	I1026 02:06:36.486145   62745 logs.go:282] 0 containers: []
	W1026 02:06:36.486152   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:36.486158   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:36.486220   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:36.524894   62745 cri.go:89] found id: ""
	I1026 02:06:36.524918   62745 logs.go:282] 0 containers: []
	W1026 02:06:36.524929   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:36.524938   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:36.524949   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:36.560351   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:36.560380   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:36.610639   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:36.610668   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:36.623311   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:36.623341   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:36.691029   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:36.691048   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:36.691059   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:36.281655   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:38.780332   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:39.129912   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:41.628882   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:39.266784   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:39.279857   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:39.279930   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:39.314381   62745 cri.go:89] found id: ""
	I1026 02:06:39.314404   62745 logs.go:282] 0 containers: []
	W1026 02:06:39.314414   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:39.314422   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:39.314485   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:39.345165   62745 cri.go:89] found id: ""
	I1026 02:06:39.345189   62745 logs.go:282] 0 containers: []
	W1026 02:06:39.345195   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:39.345202   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:39.345253   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:39.379326   62745 cri.go:89] found id: ""
	I1026 02:06:39.379358   62745 logs.go:282] 0 containers: []
	W1026 02:06:39.379369   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:39.379376   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:39.379428   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:39.410203   62745 cri.go:89] found id: ""
	I1026 02:06:39.410230   62745 logs.go:282] 0 containers: []
	W1026 02:06:39.410238   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:39.410244   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:39.410343   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:39.445836   62745 cri.go:89] found id: ""
	I1026 02:06:39.445864   62745 logs.go:282] 0 containers: []
	W1026 02:06:39.445874   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:39.445880   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:39.445929   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:39.478581   62745 cri.go:89] found id: ""
	I1026 02:06:39.478611   62745 logs.go:282] 0 containers: []
	W1026 02:06:39.478623   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:39.478630   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:39.478701   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:39.516164   62745 cri.go:89] found id: ""
	I1026 02:06:39.516189   62745 logs.go:282] 0 containers: []
	W1026 02:06:39.516197   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:39.516203   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:39.516247   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:39.547114   62745 cri.go:89] found id: ""
	I1026 02:06:39.547145   62745 logs.go:282] 0 containers: []
	W1026 02:06:39.547156   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:39.547168   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:39.547181   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:39.585134   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:39.585160   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:39.638793   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:39.638825   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:39.652471   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:39.652508   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:39.721286   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:39.721315   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:39.721328   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:42.297344   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:42.310372   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:42.310442   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:42.341290   62745 cri.go:89] found id: ""
	I1026 02:06:42.341321   62745 logs.go:282] 0 containers: []
	W1026 02:06:42.341332   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:42.341339   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:42.341402   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:42.381477   62745 cri.go:89] found id: ""
	I1026 02:06:42.381501   62745 logs.go:282] 0 containers: []
	W1026 02:06:42.381509   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:42.381515   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:42.381569   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:42.417909   62745 cri.go:89] found id: ""
	I1026 02:06:42.417933   62745 logs.go:282] 0 containers: []
	W1026 02:06:42.417947   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:42.417955   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:42.418015   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:42.453010   62745 cri.go:89] found id: ""
	I1026 02:06:42.453035   62745 logs.go:282] 0 containers: []
	W1026 02:06:42.453043   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:42.453049   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:42.453107   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:42.487736   62745 cri.go:89] found id: ""
	I1026 02:06:42.487764   62745 logs.go:282] 0 containers: []
	W1026 02:06:42.487776   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:42.487783   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:42.487841   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:42.521791   62745 cri.go:89] found id: ""
	I1026 02:06:42.521813   62745 logs.go:282] 0 containers: []
	W1026 02:06:42.521820   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:42.521826   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:42.521875   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:42.553777   62745 cri.go:89] found id: ""
	I1026 02:06:42.553801   62745 logs.go:282] 0 containers: []
	W1026 02:06:42.553808   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:42.553814   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:42.553864   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:42.584374   62745 cri.go:89] found id: ""
	I1026 02:06:42.584394   62745 logs.go:282] 0 containers: []
	W1026 02:06:42.584402   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:42.584410   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:42.584421   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:42.635442   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:42.635480   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:42.648419   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:42.648449   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:42.714599   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:42.714618   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:42.714629   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:42.791928   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:42.791962   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:40.781303   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:43.281099   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:43.629392   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:46.129197   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:48.129955   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:45.327302   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:45.340107   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:45.340166   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:45.375793   62745 cri.go:89] found id: ""
	I1026 02:06:45.375819   62745 logs.go:282] 0 containers: []
	W1026 02:06:45.375827   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:45.375833   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:45.375890   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:45.407209   62745 cri.go:89] found id: ""
	I1026 02:06:45.407235   62745 logs.go:282] 0 containers: []
	W1026 02:06:45.407243   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:45.407249   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:45.407298   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:45.438793   62745 cri.go:89] found id: ""
	I1026 02:06:45.438825   62745 logs.go:282] 0 containers: []
	W1026 02:06:45.438834   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:45.438841   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:45.438902   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:45.470153   62745 cri.go:89] found id: ""
	I1026 02:06:45.470178   62745 logs.go:282] 0 containers: []
	W1026 02:06:45.470188   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:45.470195   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:45.470256   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:45.501603   62745 cri.go:89] found id: ""
	I1026 02:06:45.501632   62745 logs.go:282] 0 containers: []
	W1026 02:06:45.501642   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:45.501649   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:45.501721   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:45.532431   62745 cri.go:89] found id: ""
	I1026 02:06:45.532457   62745 logs.go:282] 0 containers: []
	W1026 02:06:45.532466   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:45.532472   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:45.532519   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:45.563978   62745 cri.go:89] found id: ""
	I1026 02:06:45.564009   62745 logs.go:282] 0 containers: []
	W1026 02:06:45.564021   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:45.564029   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:45.564092   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:45.596484   62745 cri.go:89] found id: ""
	I1026 02:06:45.596515   62745 logs.go:282] 0 containers: []
	W1026 02:06:45.596526   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:45.596536   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:45.596550   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:45.645740   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:45.645774   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:45.658655   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:45.658678   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:45.722742   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:45.722768   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:45.722797   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:45.800213   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:45.800246   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:48.338048   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:48.350446   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:48.350511   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:48.381651   62745 cri.go:89] found id: ""
	I1026 02:06:48.381675   62745 logs.go:282] 0 containers: []
	W1026 02:06:48.381683   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:48.381689   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:48.381739   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:48.414464   62745 cri.go:89] found id: ""
	I1026 02:06:48.414496   62745 logs.go:282] 0 containers: []
	W1026 02:06:48.414508   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:48.414518   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:48.414578   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:48.446712   62745 cri.go:89] found id: ""
	I1026 02:06:48.446742   62745 logs.go:282] 0 containers: []
	W1026 02:06:48.446775   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:48.446785   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:48.446850   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:48.480096   62745 cri.go:89] found id: ""
	I1026 02:06:48.480123   62745 logs.go:282] 0 containers: []
	W1026 02:06:48.480131   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:48.480137   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:48.480191   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:48.514851   62745 cri.go:89] found id: ""
	I1026 02:06:48.514879   62745 logs.go:282] 0 containers: []
	W1026 02:06:48.514890   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:48.514898   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:48.514960   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:48.546665   62745 cri.go:89] found id: ""
	I1026 02:06:48.546690   62745 logs.go:282] 0 containers: []
	W1026 02:06:48.546699   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:48.546706   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:48.546762   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:48.578933   62745 cri.go:89] found id: ""
	I1026 02:06:48.578960   62745 logs.go:282] 0 containers: []
	W1026 02:06:48.578967   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:48.578974   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:48.579033   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:48.610559   62745 cri.go:89] found id: ""
	I1026 02:06:48.610586   62745 logs.go:282] 0 containers: []
	W1026 02:06:48.610594   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:48.610604   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:48.610614   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:48.682337   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:48.682356   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:48.682367   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:48.757174   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:48.757216   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:48.798062   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:48.798093   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:48.846972   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:48.847006   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:45.780251   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:47.781014   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:50.629035   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:52.629893   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:51.361120   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:51.373623   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:51.373694   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:51.410403   62745 cri.go:89] found id: ""
	I1026 02:06:51.410429   62745 logs.go:282] 0 containers: []
	W1026 02:06:51.410437   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:51.410443   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:51.410490   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:51.446998   62745 cri.go:89] found id: ""
	I1026 02:06:51.447029   62745 logs.go:282] 0 containers: []
	W1026 02:06:51.447040   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:51.447048   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:51.447119   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:51.482389   62745 cri.go:89] found id: ""
	I1026 02:06:51.482416   62745 logs.go:282] 0 containers: []
	W1026 02:06:51.482425   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:51.482430   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:51.482477   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:51.518224   62745 cri.go:89] found id: ""
	I1026 02:06:51.518247   62745 logs.go:282] 0 containers: []
	W1026 02:06:51.518255   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:51.518261   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:51.518311   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:51.554364   62745 cri.go:89] found id: ""
	I1026 02:06:51.554393   62745 logs.go:282] 0 containers: []
	W1026 02:06:51.554400   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:51.554406   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:51.554453   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:51.590162   62745 cri.go:89] found id: ""
	I1026 02:06:51.590184   62745 logs.go:282] 0 containers: []
	W1026 02:06:51.590193   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:51.590199   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:51.590246   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:51.627329   62745 cri.go:89] found id: ""
	I1026 02:06:51.627351   62745 logs.go:282] 0 containers: []
	W1026 02:06:51.627360   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:51.627368   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:51.627422   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:51.662588   62745 cri.go:89] found id: ""
	I1026 02:06:51.662610   62745 logs.go:282] 0 containers: []
	W1026 02:06:51.662618   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:51.662627   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:51.662637   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:51.676043   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:51.676070   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:51.745339   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:51.745369   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:51.745381   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:51.823074   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:51.823113   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:51.864777   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:51.864810   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:50.280430   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:52.280840   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:54.630165   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:57.129262   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:54.414558   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:54.426859   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:54.426914   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:54.459308   62745 cri.go:89] found id: ""
	I1026 02:06:54.459336   62745 logs.go:282] 0 containers: []
	W1026 02:06:54.459344   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:54.459350   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:54.459407   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:54.492269   62745 cri.go:89] found id: ""
	I1026 02:06:54.492297   62745 logs.go:282] 0 containers: []
	W1026 02:06:54.492305   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:54.492312   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:54.492362   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:54.529884   62745 cri.go:89] found id: ""
	I1026 02:06:54.529909   62745 logs.go:282] 0 containers: []
	W1026 02:06:54.529919   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:54.529926   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:54.529985   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:54.563565   62745 cri.go:89] found id: ""
	I1026 02:06:54.563587   62745 logs.go:282] 0 containers: []
	W1026 02:06:54.563595   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:54.563601   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:54.563667   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:54.598043   62745 cri.go:89] found id: ""
	I1026 02:06:54.598071   62745 logs.go:282] 0 containers: []
	W1026 02:06:54.598081   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:54.598089   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:54.598154   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:54.630479   62745 cri.go:89] found id: ""
	I1026 02:06:54.630504   62745 logs.go:282] 0 containers: []
	W1026 02:06:54.630514   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:54.630521   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:54.630569   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:54.664162   62745 cri.go:89] found id: ""
	I1026 02:06:54.664190   62745 logs.go:282] 0 containers: []
	W1026 02:06:54.664202   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:54.664209   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:54.664263   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:54.695829   62745 cri.go:89] found id: ""
	I1026 02:06:54.695859   62745 logs.go:282] 0 containers: []
	W1026 02:06:54.695869   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:54.695879   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:54.695893   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:54.747091   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:54.747124   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:54.760287   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:54.760313   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:54.829243   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:54.829264   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:54.829276   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:54.905695   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:54.905734   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:57.442852   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:57.455134   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:57.455195   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:57.487771   62745 cri.go:89] found id: ""
	I1026 02:06:57.487794   62745 logs.go:282] 0 containers: []
	W1026 02:06:57.487801   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:57.487807   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:57.487855   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:57.522262   62745 cri.go:89] found id: ""
	I1026 02:06:57.522287   62745 logs.go:282] 0 containers: []
	W1026 02:06:57.522294   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:57.522300   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:57.522357   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:57.557463   62745 cri.go:89] found id: ""
	I1026 02:06:57.557497   62745 logs.go:282] 0 containers: []
	W1026 02:06:57.557509   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:57.557516   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:57.557581   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:57.594175   62745 cri.go:89] found id: ""
	I1026 02:06:57.594204   62745 logs.go:282] 0 containers: []
	W1026 02:06:57.594215   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:57.594223   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:57.594290   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:57.631355   62745 cri.go:89] found id: ""
	I1026 02:06:57.631380   62745 logs.go:282] 0 containers: []
	W1026 02:06:57.631389   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:57.631397   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:57.631460   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:57.663128   62745 cri.go:89] found id: ""
	I1026 02:06:57.663156   62745 logs.go:282] 0 containers: []
	W1026 02:06:57.663166   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:57.663174   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:57.663239   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:57.697480   62745 cri.go:89] found id: ""
	I1026 02:06:57.697509   62745 logs.go:282] 0 containers: []
	W1026 02:06:57.697520   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:57.697529   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:57.697591   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:57.731295   62745 cri.go:89] found id: ""
	I1026 02:06:57.731328   62745 logs.go:282] 0 containers: []
	W1026 02:06:57.731338   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:57.731348   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:57.731363   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:57.784889   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:57.784927   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:57.797964   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:57.797996   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:57.866042   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:57.866072   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:57.866088   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:57.948186   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:57.948221   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:54.781685   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:57.280613   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:06:59.629790   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:02.129556   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:00.490019   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:00.505005   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:00.505071   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:00.537331   62745 cri.go:89] found id: ""
	I1026 02:07:00.537356   62745 logs.go:282] 0 containers: []
	W1026 02:07:00.537364   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:00.537370   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:00.537442   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:00.568650   62745 cri.go:89] found id: ""
	I1026 02:07:00.568683   62745 logs.go:282] 0 containers: []
	W1026 02:07:00.568693   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:00.568712   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:00.568764   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:00.600239   62745 cri.go:89] found id: ""
	I1026 02:07:00.600273   62745 logs.go:282] 0 containers: []
	W1026 02:07:00.600283   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:00.600289   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:00.600340   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:00.631784   62745 cri.go:89] found id: ""
	I1026 02:07:00.631807   62745 logs.go:282] 0 containers: []
	W1026 02:07:00.631814   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:00.631820   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:00.631870   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:00.671299   62745 cri.go:89] found id: ""
	I1026 02:07:00.671325   62745 logs.go:282] 0 containers: []
	W1026 02:07:00.671335   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:00.671343   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:00.671402   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:00.704770   62745 cri.go:89] found id: ""
	I1026 02:07:00.704803   62745 logs.go:282] 0 containers: []
	W1026 02:07:00.704815   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:00.704823   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:00.704878   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:00.738455   62745 cri.go:89] found id: ""
	I1026 02:07:00.738483   62745 logs.go:282] 0 containers: []
	W1026 02:07:00.738495   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:00.738504   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:00.738562   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:00.772180   62745 cri.go:89] found id: ""
	I1026 02:07:00.772205   62745 logs.go:282] 0 containers: []
	W1026 02:07:00.772217   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:00.772225   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:00.772238   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:00.784854   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:00.784877   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:00.859263   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:00.859286   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:00.859300   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:00.933055   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:00.933090   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:00.969165   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:00.969194   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:03.521059   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:03.533917   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:03.533980   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:03.567714   62745 cri.go:89] found id: ""
	I1026 02:07:03.567745   62745 logs.go:282] 0 containers: []
	W1026 02:07:03.567756   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:03.567765   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:03.567816   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:03.600069   62745 cri.go:89] found id: ""
	I1026 02:07:03.600096   62745 logs.go:282] 0 containers: []
	W1026 02:07:03.600104   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:03.600109   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:03.600158   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:03.634048   62745 cri.go:89] found id: ""
	I1026 02:07:03.634069   62745 logs.go:282] 0 containers: []
	W1026 02:07:03.634077   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:03.634085   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:03.634147   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:03.666190   62745 cri.go:89] found id: ""
	I1026 02:07:03.666219   62745 logs.go:282] 0 containers: []
	W1026 02:07:03.666227   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:03.666233   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:03.666284   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:03.698739   62745 cri.go:89] found id: ""
	I1026 02:07:03.698762   62745 logs.go:282] 0 containers: []
	W1026 02:07:03.698770   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:03.698776   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:03.698820   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:03.731198   62745 cri.go:89] found id: ""
	I1026 02:07:03.731227   62745 logs.go:282] 0 containers: []
	W1026 02:07:03.731235   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:03.731242   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:03.731295   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:03.763557   62745 cri.go:89] found id: ""
	I1026 02:07:03.763587   62745 logs.go:282] 0 containers: []
	W1026 02:07:03.763598   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:03.763604   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:03.763666   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:03.797591   62745 cri.go:89] found id: ""
	I1026 02:07:03.797624   62745 logs.go:282] 0 containers: []
	W1026 02:07:03.797635   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:03.797646   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:03.797659   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:03.876991   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:03.877030   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:03.914148   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:03.914174   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:03.964260   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:03.964297   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:03.977178   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:03.977207   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:04.044076   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:59.780106   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:01.781266   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:03.782007   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:04.129887   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:06.629669   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:06.544738   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:06.559517   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:06.559590   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:06.595039   62745 cri.go:89] found id: ""
	I1026 02:07:06.595069   62745 logs.go:282] 0 containers: []
	W1026 02:07:06.595081   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:06.595088   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:06.595150   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:06.634699   62745 cri.go:89] found id: ""
	I1026 02:07:06.634724   62745 logs.go:282] 0 containers: []
	W1026 02:07:06.634734   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:06.634742   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:06.634807   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:06.668025   62745 cri.go:89] found id: ""
	I1026 02:07:06.668057   62745 logs.go:282] 0 containers: []
	W1026 02:07:06.668070   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:06.668077   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:06.668144   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:06.699415   62745 cri.go:89] found id: ""
	I1026 02:07:06.699443   62745 logs.go:282] 0 containers: []
	W1026 02:07:06.699452   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:06.699458   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:06.699518   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:06.731125   62745 cri.go:89] found id: ""
	I1026 02:07:06.731152   62745 logs.go:282] 0 containers: []
	W1026 02:07:06.731163   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:06.731170   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:06.731226   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:06.763697   62745 cri.go:89] found id: ""
	I1026 02:07:06.763727   62745 logs.go:282] 0 containers: []
	W1026 02:07:06.763735   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:06.763741   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:06.763797   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:06.796924   62745 cri.go:89] found id: ""
	I1026 02:07:06.796956   62745 logs.go:282] 0 containers: []
	W1026 02:07:06.796964   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:06.796970   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:06.797032   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:06.828696   62745 cri.go:89] found id: ""
	I1026 02:07:06.828724   62745 logs.go:282] 0 containers: []
	W1026 02:07:06.828734   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:06.828745   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:06.828762   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:06.878771   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:06.878816   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:06.892038   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:06.892065   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:06.961856   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:06.961883   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:06.961897   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:07.035069   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:07.035102   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:06.280672   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:08.281667   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:08.630212   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:10.630294   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:12.630350   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:09.571983   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:09.584509   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:09.584583   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:09.619361   62745 cri.go:89] found id: ""
	I1026 02:07:09.619389   62745 logs.go:282] 0 containers: []
	W1026 02:07:09.619400   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:09.619409   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:09.619469   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:09.653625   62745 cri.go:89] found id: ""
	I1026 02:07:09.653653   62745 logs.go:282] 0 containers: []
	W1026 02:07:09.653663   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:09.653671   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:09.653734   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:09.692876   62745 cri.go:89] found id: ""
	I1026 02:07:09.692906   62745 logs.go:282] 0 containers: []
	W1026 02:07:09.692920   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:09.692927   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:09.692989   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:09.726058   62745 cri.go:89] found id: ""
	I1026 02:07:09.726080   62745 logs.go:282] 0 containers: []
	W1026 02:07:09.726088   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:09.726094   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:09.726142   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:09.767085   62745 cri.go:89] found id: ""
	I1026 02:07:09.767106   62745 logs.go:282] 0 containers: []
	W1026 02:07:09.767114   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:09.767120   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:09.767171   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:09.800385   62745 cri.go:89] found id: ""
	I1026 02:07:09.800411   62745 logs.go:282] 0 containers: []
	W1026 02:07:09.800421   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:09.800429   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:09.800490   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:09.833916   62745 cri.go:89] found id: ""
	I1026 02:07:09.833945   62745 logs.go:282] 0 containers: []
	W1026 02:07:09.833955   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:09.833962   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:09.834024   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:09.870980   62745 cri.go:89] found id: ""
	I1026 02:07:09.871011   62745 logs.go:282] 0 containers: []
	W1026 02:07:09.871023   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:09.871034   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:09.871045   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:09.911303   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:09.911339   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:09.985639   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:09.985682   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:10.005161   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:10.005191   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:10.075685   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:10.075707   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:10.075721   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:12.652289   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:12.664631   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:12.664706   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:12.702751   62745 cri.go:89] found id: ""
	I1026 02:07:12.702782   62745 logs.go:282] 0 containers: []
	W1026 02:07:12.702793   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:12.702801   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:12.702856   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:12.736207   62745 cri.go:89] found id: ""
	I1026 02:07:12.736230   62745 logs.go:282] 0 containers: []
	W1026 02:07:12.736240   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:12.736248   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:12.736312   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:12.767932   62745 cri.go:89] found id: ""
	I1026 02:07:12.767962   62745 logs.go:282] 0 containers: []
	W1026 02:07:12.767972   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:12.767980   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:12.768037   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:12.799843   62745 cri.go:89] found id: ""
	I1026 02:07:12.799869   62745 logs.go:282] 0 containers: []
	W1026 02:07:12.799877   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:12.799894   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:12.799947   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:12.831972   62745 cri.go:89] found id: ""
	I1026 02:07:12.832002   62745 logs.go:282] 0 containers: []
	W1026 02:07:12.832014   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:12.832021   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:12.832084   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:12.865967   62745 cri.go:89] found id: ""
	I1026 02:07:12.865995   62745 logs.go:282] 0 containers: []
	W1026 02:07:12.866005   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:12.866013   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:12.866073   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:12.901089   62745 cri.go:89] found id: ""
	I1026 02:07:12.901117   62745 logs.go:282] 0 containers: []
	W1026 02:07:12.901125   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:12.901132   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:12.901187   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:12.933143   62745 cri.go:89] found id: ""
	I1026 02:07:12.933170   62745 logs.go:282] 0 containers: []
	W1026 02:07:12.933178   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:12.933186   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:12.933195   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:13.016014   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:13.016059   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:13.058520   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:13.058556   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:13.110178   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:13.110219   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:13.124831   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:13.124865   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:13.195503   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:10.781513   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:13.281170   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:14.152750   61346 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000457142s
	I1026 02:07:14.152797   61346 kubeadm.go:310] 
	I1026 02:07:14.152845   61346 kubeadm.go:310] Unfortunately, an error has occurred:
	I1026 02:07:14.152890   61346 kubeadm.go:310] 	context deadline exceeded
	I1026 02:07:14.152898   61346 kubeadm.go:310] 
	I1026 02:07:14.152948   61346 kubeadm.go:310] This error is likely caused by:
	I1026 02:07:14.152997   61346 kubeadm.go:310] 	- The kubelet is not running
	I1026 02:07:14.153161   61346 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1026 02:07:14.153195   61346 kubeadm.go:310] 
	I1026 02:07:14.153316   61346 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1026 02:07:14.153347   61346 kubeadm.go:310] 	- 'systemctl status kubelet'
	I1026 02:07:14.153385   61346 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I1026 02:07:14.153392   61346 kubeadm.go:310] 
	I1026 02:07:14.153519   61346 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1026 02:07:14.153622   61346 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1026 02:07:14.153730   61346 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1026 02:07:14.153852   61346 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1026 02:07:14.153964   61346 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I1026 02:07:14.154080   61346 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1026 02:07:14.154590   61346 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 02:07:14.154741   61346 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I1026 02:07:14.154843   61346 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1026 02:07:14.155012   61346 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001868121s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000457142s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
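	(The kubeadm failure above already names the inspection steps; the following is only a minimal sketch of that inspection loop on the node, using the exact commands from the kubeadm hint. CONTAINERID is a placeholder for whichever control-plane container turns out to be failing; run the commands as root/sudo on the node.)
	
	    # check whether the kubelet itself is up (commands taken from the kubeadm hint above)
	    systemctl status kubelet
	    journalctl -xeu kubelet
	
	    # list the control-plane containers CRI-O started, then inspect the failing one's logs
	    crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	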
	I1026 02:07:14.155068   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1026 02:07:14.845139   61346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 02:07:14.859829   61346 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:07:14.869581   61346 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:07:14.869605   61346 kubeadm.go:157] found existing configuration files:
	
	I1026 02:07:14.869658   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 02:07:14.879555   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:07:14.879618   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:07:14.888760   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 02:07:14.897408   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:07:14.897465   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:07:14.906440   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 02:07:14.915099   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:07:14.915154   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:07:14.924509   61346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 02:07:14.933049   61346 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:07:14.933105   61346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 02:07:14.941731   61346 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 02:07:15.087537   61346 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 02:07:14.631282   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:17.131383   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:15.695875   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:15.711218   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:15.711288   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:15.769098   62745 cri.go:89] found id: ""
	I1026 02:07:15.769121   62745 logs.go:282] 0 containers: []
	W1026 02:07:15.769129   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:15.769135   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:15.769189   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:15.805018   62745 cri.go:89] found id: ""
	I1026 02:07:15.805046   62745 logs.go:282] 0 containers: []
	W1026 02:07:15.805054   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:15.805061   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:15.805125   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:15.842671   62745 cri.go:89] found id: ""
	I1026 02:07:15.842694   62745 logs.go:282] 0 containers: []
	W1026 02:07:15.842702   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:15.842709   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:15.842757   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:15.874827   62745 cri.go:89] found id: ""
	I1026 02:07:15.874862   62745 logs.go:282] 0 containers: []
	W1026 02:07:15.874873   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:15.874882   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:15.874942   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:15.908597   62745 cri.go:89] found id: ""
	I1026 02:07:15.908623   62745 logs.go:282] 0 containers: []
	W1026 02:07:15.908648   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:15.908655   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:15.908713   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:15.943192   62745 cri.go:89] found id: ""
	I1026 02:07:15.943226   62745 logs.go:282] 0 containers: []
	W1026 02:07:15.943237   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:15.943243   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:15.943313   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:15.982067   62745 cri.go:89] found id: ""
	I1026 02:07:15.982096   62745 logs.go:282] 0 containers: []
	W1026 02:07:15.982107   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:15.982114   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:15.982173   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:16.013666   62745 cri.go:89] found id: ""
	I1026 02:07:16.013695   62745 logs.go:282] 0 containers: []
	W1026 02:07:16.013706   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:16.013717   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:16.013732   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:16.064292   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:16.064328   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:16.077236   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:16.077262   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:16.148584   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:16.148612   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:16.148626   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:16.226871   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:16.226905   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:18.765112   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:18.780092   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:18.780166   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:18.813016   62745 cri.go:89] found id: ""
	I1026 02:07:18.813040   62745 logs.go:282] 0 containers: []
	W1026 02:07:18.813047   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:18.813053   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:18.813102   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:18.850376   62745 cri.go:89] found id: ""
	I1026 02:07:18.850399   62745 logs.go:282] 0 containers: []
	W1026 02:07:18.850410   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:18.850417   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:18.850475   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:18.882562   62745 cri.go:89] found id: ""
	I1026 02:07:18.882589   62745 logs.go:282] 0 containers: []
	W1026 02:07:18.882600   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:18.882607   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:18.882665   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:18.915214   62745 cri.go:89] found id: ""
	I1026 02:07:18.915243   62745 logs.go:282] 0 containers: []
	W1026 02:07:18.915253   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:18.915259   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:18.915319   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:18.946171   62745 cri.go:89] found id: ""
	I1026 02:07:18.946197   62745 logs.go:282] 0 containers: []
	W1026 02:07:18.946205   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:18.946211   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:18.946258   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:18.978013   62745 cri.go:89] found id: ""
	I1026 02:07:18.978041   62745 logs.go:282] 0 containers: []
	W1026 02:07:18.978049   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:18.978055   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:18.978111   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:19.016121   62745 cri.go:89] found id: ""
	I1026 02:07:19.016149   62745 logs.go:282] 0 containers: []
	W1026 02:07:19.016161   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:19.016169   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:19.016226   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:19.047167   62745 cri.go:89] found id: ""
	I1026 02:07:19.047196   62745 logs.go:282] 0 containers: []
	W1026 02:07:19.047204   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:19.047213   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:19.047222   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:19.098945   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:19.098981   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:19.111645   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:19.111675   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 02:07:15.782095   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:18.281563   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:19.629184   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:21.630370   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	W1026 02:07:19.178986   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:19.179001   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:19.179012   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:19.251707   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:19.251741   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:21.790677   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:21.803898   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:21.803981   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:21.837240   62745 cri.go:89] found id: ""
	I1026 02:07:21.837267   62745 logs.go:282] 0 containers: []
	W1026 02:07:21.837277   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:21.837283   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:21.837330   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:21.869245   62745 cri.go:89] found id: ""
	I1026 02:07:21.869276   62745 logs.go:282] 0 containers: []
	W1026 02:07:21.869287   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:21.869296   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:21.869356   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:21.899736   62745 cri.go:89] found id: ""
	I1026 02:07:21.899762   62745 logs.go:282] 0 containers: []
	W1026 02:07:21.899771   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:21.899777   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:21.899827   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:21.931420   62745 cri.go:89] found id: ""
	I1026 02:07:21.931439   62745 logs.go:282] 0 containers: []
	W1026 02:07:21.931446   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:21.931453   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:21.931498   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:21.963732   62745 cri.go:89] found id: ""
	I1026 02:07:21.963760   62745 logs.go:282] 0 containers: []
	W1026 02:07:21.963768   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:21.963774   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:21.963823   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:21.994522   62745 cri.go:89] found id: ""
	I1026 02:07:21.994550   62745 logs.go:282] 0 containers: []
	W1026 02:07:21.994560   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:21.994567   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:21.994628   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:22.028461   62745 cri.go:89] found id: ""
	I1026 02:07:22.028487   62745 logs.go:282] 0 containers: []
	W1026 02:07:22.028495   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:22.028501   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:22.028548   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:22.069623   62745 cri.go:89] found id: ""
	I1026 02:07:22.069677   62745 logs.go:282] 0 containers: []
	W1026 02:07:22.069692   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:22.069703   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:22.069716   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:22.121635   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:22.121670   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:22.135584   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:22.135617   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:22.199981   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:22.200005   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:22.200021   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:22.279029   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:22.279060   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:20.780736   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:23.280584   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:24.129029   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:26.129698   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:28.136275   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:24.817446   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:24.830485   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:24.830554   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:24.862966   62745 cri.go:89] found id: ""
	I1026 02:07:24.862999   62745 logs.go:282] 0 containers: []
	W1026 02:07:24.863007   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:24.863013   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:24.863070   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:24.894041   62745 cri.go:89] found id: ""
	I1026 02:07:24.894073   62745 logs.go:282] 0 containers: []
	W1026 02:07:24.894084   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:24.894089   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:24.894150   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:24.927062   62745 cri.go:89] found id: ""
	I1026 02:07:24.927093   62745 logs.go:282] 0 containers: []
	W1026 02:07:24.927102   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:24.927108   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:24.927172   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:24.963297   62745 cri.go:89] found id: ""
	I1026 02:07:24.963329   62745 logs.go:282] 0 containers: []
	W1026 02:07:24.963340   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:24.963347   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:24.963409   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:24.998408   62745 cri.go:89] found id: ""
	I1026 02:07:24.998437   62745 logs.go:282] 0 containers: []
	W1026 02:07:24.998446   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:24.998453   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:24.998511   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:25.029763   62745 cri.go:89] found id: ""
	I1026 02:07:25.029787   62745 logs.go:282] 0 containers: []
	W1026 02:07:25.029795   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:25.029801   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:25.029859   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:25.066700   62745 cri.go:89] found id: ""
	I1026 02:07:25.066723   62745 logs.go:282] 0 containers: []
	W1026 02:07:25.066730   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:25.066736   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:25.066786   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:25.099954   62745 cri.go:89] found id: ""
	I1026 02:07:25.099984   62745 logs.go:282] 0 containers: []
	W1026 02:07:25.099995   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:25.100006   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:25.100021   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:25.149728   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:25.149762   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:25.163029   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:25.163077   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:25.234081   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:25.234103   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:25.234118   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:25.318655   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:25.318690   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:27.862030   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:27.874072   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:27.874138   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:27.905856   62745 cri.go:89] found id: ""
	I1026 02:07:27.905887   62745 logs.go:282] 0 containers: []
	W1026 02:07:27.905895   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:27.905901   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:27.905960   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:27.938698   62745 cri.go:89] found id: ""
	I1026 02:07:27.938724   62745 logs.go:282] 0 containers: []
	W1026 02:07:27.938733   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:27.938738   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:27.938786   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:27.971463   62745 cri.go:89] found id: ""
	I1026 02:07:27.971488   62745 logs.go:282] 0 containers: []
	W1026 02:07:27.971495   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:27.971501   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:27.971550   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:28.005774   62745 cri.go:89] found id: ""
	I1026 02:07:28.005802   62745 logs.go:282] 0 containers: []
	W1026 02:07:28.005810   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:28.005815   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:28.005867   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:28.038145   62745 cri.go:89] found id: ""
	I1026 02:07:28.038171   62745 logs.go:282] 0 containers: []
	W1026 02:07:28.038179   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:28.038185   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:28.038240   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:28.069925   62745 cri.go:89] found id: ""
	I1026 02:07:28.069956   62745 logs.go:282] 0 containers: []
	W1026 02:07:28.069967   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:28.069976   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:28.070030   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:28.102171   62745 cri.go:89] found id: ""
	I1026 02:07:28.102198   62745 logs.go:282] 0 containers: []
	W1026 02:07:28.102206   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:28.102212   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:28.102269   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:28.137136   62745 cri.go:89] found id: ""
	I1026 02:07:28.137160   62745 logs.go:282] 0 containers: []
	W1026 02:07:28.137170   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:28.137180   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:28.137204   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:28.187087   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:28.187122   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:28.200246   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:28.200272   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:28.268977   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:28.268997   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:28.269011   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:28.348053   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:28.348085   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:25.280875   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:27.780165   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:30.629746   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:33.129315   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:30.885122   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:30.897635   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:30.897708   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:30.929357   62745 cri.go:89] found id: ""
	I1026 02:07:30.929381   62745 logs.go:282] 0 containers: []
	W1026 02:07:30.929389   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:30.929395   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:30.929470   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:30.968281   62745 cri.go:89] found id: ""
	I1026 02:07:30.968313   62745 logs.go:282] 0 containers: []
	W1026 02:07:30.968323   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:30.968330   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:30.968390   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:31.002710   62745 cri.go:89] found id: ""
	I1026 02:07:31.002739   62745 logs.go:282] 0 containers: []
	W1026 02:07:31.002749   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:31.002755   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:31.002815   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:31.034820   62745 cri.go:89] found id: ""
	I1026 02:07:31.034845   62745 logs.go:282] 0 containers: []
	W1026 02:07:31.034853   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:31.034858   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:31.034904   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:31.066878   62745 cri.go:89] found id: ""
	I1026 02:07:31.066906   62745 logs.go:282] 0 containers: []
	W1026 02:07:31.066913   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:31.066926   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:31.066976   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:31.099026   62745 cri.go:89] found id: ""
	I1026 02:07:31.099052   62745 logs.go:282] 0 containers: []
	W1026 02:07:31.099060   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:31.099066   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:31.099119   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:31.133025   62745 cri.go:89] found id: ""
	I1026 02:07:31.133056   62745 logs.go:282] 0 containers: []
	W1026 02:07:31.133065   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:31.133070   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:31.133119   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:31.165739   62745 cri.go:89] found id: ""
	I1026 02:07:31.165774   62745 logs.go:282] 0 containers: []
	W1026 02:07:31.165785   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:31.165795   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:31.165809   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:31.233734   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:31.233756   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:31.233767   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:31.313364   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:31.313396   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:31.349829   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:31.349864   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:31.400897   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:31.400932   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:33.914141   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:33.926206   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:33.926284   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:33.960359   62745 cri.go:89] found id: ""
	I1026 02:07:33.960390   62745 logs.go:282] 0 containers: []
	W1026 02:07:33.960401   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:33.960408   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:33.960461   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:33.991394   62745 cri.go:89] found id: ""
	I1026 02:07:33.991419   62745 logs.go:282] 0 containers: []
	W1026 02:07:33.991427   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:33.991433   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:33.991491   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:34.023354   62745 cri.go:89] found id: ""
	I1026 02:07:34.023383   62745 logs.go:282] 0 containers: []
	W1026 02:07:34.023394   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:34.023402   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:34.023459   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:34.054427   62745 cri.go:89] found id: ""
	I1026 02:07:34.054452   62745 logs.go:282] 0 containers: []
	W1026 02:07:34.054463   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:34.054470   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:34.054529   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:34.084889   62745 cri.go:89] found id: ""
	I1026 02:07:34.084912   62745 logs.go:282] 0 containers: []
	W1026 02:07:34.084919   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:34.084924   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:34.084975   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:34.116018   62745 cri.go:89] found id: ""
	I1026 02:07:34.116052   62745 logs.go:282] 0 containers: []
	W1026 02:07:34.116063   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:34.116071   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:34.116136   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:34.151471   62745 cri.go:89] found id: ""
	I1026 02:07:34.151497   62745 logs.go:282] 0 containers: []
	W1026 02:07:34.151505   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:34.151512   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:34.151558   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:29.781922   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:32.280613   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:34.281574   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:35.629891   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:38.129333   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:34.186774   62745 cri.go:89] found id: ""
	I1026 02:07:34.186807   62745 logs.go:282] 0 containers: []
	W1026 02:07:34.186819   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:34.186831   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:34.186852   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:34.257139   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:34.257159   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:34.257170   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:34.338903   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:34.338935   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:34.375388   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:34.375419   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:34.422999   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:34.423032   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:36.937328   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:36.949435   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:36.949509   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:36.984087   62745 cri.go:89] found id: ""
	I1026 02:07:36.984124   62745 logs.go:282] 0 containers: []
	W1026 02:07:36.984136   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:36.984145   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:36.984206   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:37.019912   62745 cri.go:89] found id: ""
	I1026 02:07:37.019939   62745 logs.go:282] 0 containers: []
	W1026 02:07:37.019947   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:37.019954   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:37.020010   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:37.053267   62745 cri.go:89] found id: ""
	I1026 02:07:37.053298   62745 logs.go:282] 0 containers: []
	W1026 02:07:37.053309   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:37.053317   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:37.053378   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:37.085611   62745 cri.go:89] found id: ""
	I1026 02:07:37.085638   62745 logs.go:282] 0 containers: []
	W1026 02:07:37.085646   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:37.085652   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:37.085719   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:37.122232   62745 cri.go:89] found id: ""
	I1026 02:07:37.122261   62745 logs.go:282] 0 containers: []
	W1026 02:07:37.122273   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:37.122281   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:37.122341   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:37.157453   62745 cri.go:89] found id: ""
	I1026 02:07:37.157484   62745 logs.go:282] 0 containers: []
	W1026 02:07:37.157497   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:37.157506   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:37.157571   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:37.190447   62745 cri.go:89] found id: ""
	I1026 02:07:37.190499   62745 logs.go:282] 0 containers: []
	W1026 02:07:37.190511   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:37.190520   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:37.190579   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:37.222653   62745 cri.go:89] found id: ""
	I1026 02:07:37.222693   62745 logs.go:282] 0 containers: []
	W1026 02:07:37.222704   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:37.222715   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:37.222727   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:37.300290   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:37.300334   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:37.342382   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:37.342410   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:37.390612   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:37.390648   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:37.405298   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:37.405324   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:37.468405   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:36.780236   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:38.781488   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:40.130514   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:42.628908   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:39.969006   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:39.981596   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:39.981663   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:40.014471   62745 cri.go:89] found id: ""
	I1026 02:07:40.014498   62745 logs.go:282] 0 containers: []
	W1026 02:07:40.014506   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:40.014513   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:40.014572   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:40.044844   62745 cri.go:89] found id: ""
	I1026 02:07:40.044864   62745 logs.go:282] 0 containers: []
	W1026 02:07:40.044872   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:40.044877   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:40.044931   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:40.076739   62745 cri.go:89] found id: ""
	I1026 02:07:40.076767   62745 logs.go:282] 0 containers: []
	W1026 02:07:40.076778   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:40.076785   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:40.076847   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:40.113147   62745 cri.go:89] found id: ""
	I1026 02:07:40.113173   62745 logs.go:282] 0 containers: []
	W1026 02:07:40.113185   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:40.113193   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:40.113248   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:40.144403   62745 cri.go:89] found id: ""
	I1026 02:07:40.144431   62745 logs.go:282] 0 containers: []
	W1026 02:07:40.144441   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:40.144449   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:40.144497   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:40.176560   62745 cri.go:89] found id: ""
	I1026 02:07:40.176585   62745 logs.go:282] 0 containers: []
	W1026 02:07:40.176593   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:40.176599   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:40.176647   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:40.208831   62745 cri.go:89] found id: ""
	I1026 02:07:40.208864   62745 logs.go:282] 0 containers: []
	W1026 02:07:40.208884   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:40.208892   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:40.208949   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:40.247489   62745 cri.go:89] found id: ""
	I1026 02:07:40.247516   62745 logs.go:282] 0 containers: []
	W1026 02:07:40.247527   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:40.247538   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:40.247556   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:40.300537   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:40.300570   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:40.313996   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:40.314025   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:40.382390   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:40.382411   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:40.382422   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:40.454832   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:40.454866   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:42.990657   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:43.002906   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:43.002980   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:43.038888   62745 cri.go:89] found id: ""
	I1026 02:07:43.038921   62745 logs.go:282] 0 containers: []
	W1026 02:07:43.038934   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:43.038942   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:43.039007   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:43.071463   62745 cri.go:89] found id: ""
	I1026 02:07:43.071490   62745 logs.go:282] 0 containers: []
	W1026 02:07:43.071500   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:43.071507   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:43.071569   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:43.104362   62745 cri.go:89] found id: ""
	I1026 02:07:43.104392   62745 logs.go:282] 0 containers: []
	W1026 02:07:43.104403   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:43.104411   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:43.104469   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:43.137037   62745 cri.go:89] found id: ""
	I1026 02:07:43.137069   62745 logs.go:282] 0 containers: []
	W1026 02:07:43.137080   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:43.137087   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:43.137140   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:43.170616   62745 cri.go:89] found id: ""
	I1026 02:07:43.170641   62745 logs.go:282] 0 containers: []
	W1026 02:07:43.170649   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:43.170655   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:43.170709   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:43.203376   62745 cri.go:89] found id: ""
	I1026 02:07:43.203404   62745 logs.go:282] 0 containers: []
	W1026 02:07:43.203412   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:43.203417   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:43.203471   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:43.235154   62745 cri.go:89] found id: ""
	I1026 02:07:43.235177   62745 logs.go:282] 0 containers: []
	W1026 02:07:43.235185   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:43.235190   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:43.235241   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:43.268212   62745 cri.go:89] found id: ""
	I1026 02:07:43.268236   62745 logs.go:282] 0 containers: []
	W1026 02:07:43.268248   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:43.268258   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:43.268270   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:43.339460   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:43.339479   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:43.339493   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:43.422470   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:43.422508   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:43.460588   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:43.460613   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:43.509466   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:43.509500   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:41.280565   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:43.780403   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:44.629599   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:47.129345   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:46.023798   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:46.036335   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:46.036394   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:46.069673   62745 cri.go:89] found id: ""
	I1026 02:07:46.069698   62745 logs.go:282] 0 containers: []
	W1026 02:07:46.069706   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:46.069712   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:46.069760   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:46.101565   62745 cri.go:89] found id: ""
	I1026 02:07:46.101590   62745 logs.go:282] 0 containers: []
	W1026 02:07:46.101599   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:46.101606   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:46.101668   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:46.133748   62745 cri.go:89] found id: ""
	I1026 02:07:46.133776   62745 logs.go:282] 0 containers: []
	W1026 02:07:46.133786   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:46.133794   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:46.133851   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:46.164918   62745 cri.go:89] found id: ""
	I1026 02:07:46.164953   62745 logs.go:282] 0 containers: []
	W1026 02:07:46.164963   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:46.164972   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:46.165029   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:46.198417   62745 cri.go:89] found id: ""
	I1026 02:07:46.198439   62745 logs.go:282] 0 containers: []
	W1026 02:07:46.198446   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:46.198452   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:46.198507   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:46.233857   62745 cri.go:89] found id: ""
	I1026 02:07:46.233882   62745 logs.go:282] 0 containers: []
	W1026 02:07:46.233891   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:46.233896   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:46.233943   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:46.267445   62745 cri.go:89] found id: ""
	I1026 02:07:46.267476   62745 logs.go:282] 0 containers: []
	W1026 02:07:46.267485   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:46.267498   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:46.267547   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:46.300564   62745 cri.go:89] found id: ""
	I1026 02:07:46.300594   62745 logs.go:282] 0 containers: []
	W1026 02:07:46.300601   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:46.300609   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:46.300619   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:46.353129   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:46.353163   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:46.366154   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:46.366183   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:46.439252   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:46.439271   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:46.439286   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:46.519713   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:46.519748   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:49.057451   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:49.070194   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:49.070269   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:49.102886   62745 cri.go:89] found id: ""
	I1026 02:07:49.102915   62745 logs.go:282] 0 containers: []
	W1026 02:07:49.102926   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:49.102935   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:49.102994   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:49.134727   62745 cri.go:89] found id: ""
	I1026 02:07:49.134755   62745 logs.go:282] 0 containers: []
	W1026 02:07:49.134765   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:49.134773   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:49.134832   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:49.166121   62745 cri.go:89] found id: ""
	I1026 02:07:49.166148   62745 logs.go:282] 0 containers: []
	W1026 02:07:49.166158   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:49.166166   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:49.166223   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:46.280751   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:48.293307   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:49.129659   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:51.135415   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:49.197999   62745 cri.go:89] found id: ""
	I1026 02:07:49.198033   62745 logs.go:282] 0 containers: []
	W1026 02:07:49.198045   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:49.198052   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:49.198111   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:49.231619   62745 cri.go:89] found id: ""
	I1026 02:07:49.231649   62745 logs.go:282] 0 containers: []
	W1026 02:07:49.231661   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:49.231669   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:49.231733   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:49.264930   62745 cri.go:89] found id: ""
	I1026 02:07:49.264961   62745 logs.go:282] 0 containers: []
	W1026 02:07:49.264973   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:49.264981   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:49.265040   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:49.298194   62745 cri.go:89] found id: ""
	I1026 02:07:49.298226   62745 logs.go:282] 0 containers: []
	W1026 02:07:49.298237   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:49.298244   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:49.298304   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:49.330293   62745 cri.go:89] found id: ""
	I1026 02:07:49.330325   62745 logs.go:282] 0 containers: []
	W1026 02:07:49.330336   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:49.330346   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:49.330361   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:49.365408   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:49.365457   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:49.415642   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:49.415677   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:49.428140   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:49.428168   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:49.499178   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:49.499205   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:49.499220   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:52.079906   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:52.093071   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:52.093149   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:52.126358   62745 cri.go:89] found id: ""
	I1026 02:07:52.126381   62745 logs.go:282] 0 containers: []
	W1026 02:07:52.126389   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:52.126402   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:52.126461   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:52.159802   62745 cri.go:89] found id: ""
	I1026 02:07:52.159833   62745 logs.go:282] 0 containers: []
	W1026 02:07:52.159844   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:52.159852   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:52.159914   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:52.194500   62745 cri.go:89] found id: ""
	I1026 02:07:52.194530   62745 logs.go:282] 0 containers: []
	W1026 02:07:52.194541   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:52.194555   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:52.194616   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:52.229565   62745 cri.go:89] found id: ""
	I1026 02:07:52.229589   62745 logs.go:282] 0 containers: []
	W1026 02:07:52.229597   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:52.229603   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:52.229664   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:52.265769   62745 cri.go:89] found id: ""
	I1026 02:07:52.265808   62745 logs.go:282] 0 containers: []
	W1026 02:07:52.265819   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:52.265827   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:52.265887   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:52.299292   62745 cri.go:89] found id: ""
	I1026 02:07:52.299316   62745 logs.go:282] 0 containers: []
	W1026 02:07:52.299324   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:52.299330   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:52.299384   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:52.332085   62745 cri.go:89] found id: ""
	I1026 02:07:52.332108   62745 logs.go:282] 0 containers: []
	W1026 02:07:52.332116   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:52.332122   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:52.332180   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:52.364882   62745 cri.go:89] found id: ""
	I1026 02:07:52.364907   62745 logs.go:282] 0 containers: []
	W1026 02:07:52.364915   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:52.364923   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:52.364934   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:52.401295   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:52.401326   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:52.452282   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:52.452315   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:52.465630   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:52.465659   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:52.532282   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:52.532303   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:52.532316   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:50.780616   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:53.280433   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:53.629845   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:56.129375   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:58.129497   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:55.107880   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:55.120420   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:55.120498   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:55.154952   62745 cri.go:89] found id: ""
	I1026 02:07:55.154981   62745 logs.go:282] 0 containers: []
	W1026 02:07:55.154991   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:55.154997   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:55.155046   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:55.189882   62745 cri.go:89] found id: ""
	I1026 02:07:55.189909   62745 logs.go:282] 0 containers: []
	W1026 02:07:55.189919   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:55.189935   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:55.189985   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:55.221941   62745 cri.go:89] found id: ""
	I1026 02:07:55.221965   62745 logs.go:282] 0 containers: []
	W1026 02:07:55.221973   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:55.221979   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:55.222027   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:55.268127   62745 cri.go:89] found id: ""
	I1026 02:07:55.268155   62745 logs.go:282] 0 containers: []
	W1026 02:07:55.268165   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:55.268173   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:55.268229   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:55.301559   62745 cri.go:89] found id: ""
	I1026 02:07:55.301583   62745 logs.go:282] 0 containers: []
	W1026 02:07:55.301591   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:55.301597   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:55.301644   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:55.335479   62745 cri.go:89] found id: ""
	I1026 02:07:55.335509   62745 logs.go:282] 0 containers: []
	W1026 02:07:55.335521   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:55.335529   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:55.335601   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:55.366749   62745 cri.go:89] found id: ""
	I1026 02:07:55.366771   62745 logs.go:282] 0 containers: []
	W1026 02:07:55.366779   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:55.366785   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:55.366847   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:55.397880   62745 cri.go:89] found id: ""
	I1026 02:07:55.397906   62745 logs.go:282] 0 containers: []
	W1026 02:07:55.397912   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:55.397920   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:55.397937   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:55.465665   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:55.465688   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:55.465704   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:55.543012   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:55.543052   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:55.578358   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:55.578388   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:55.631250   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:55.631282   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:58.144367   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:58.156714   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:58.156792   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:58.189562   62745 cri.go:89] found id: ""
	I1026 02:07:58.189587   62745 logs.go:282] 0 containers: []
	W1026 02:07:58.189595   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:58.189626   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:58.189687   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:58.222695   62745 cri.go:89] found id: ""
	I1026 02:07:58.222721   62745 logs.go:282] 0 containers: []
	W1026 02:07:58.222729   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:58.222735   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:58.222795   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:58.260873   62745 cri.go:89] found id: ""
	I1026 02:07:58.260904   62745 logs.go:282] 0 containers: []
	W1026 02:07:58.260916   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:58.260924   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:58.260991   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:58.294508   62745 cri.go:89] found id: ""
	I1026 02:07:58.294535   62745 logs.go:282] 0 containers: []
	W1026 02:07:58.294546   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:58.294553   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:58.294616   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:58.327554   62745 cri.go:89] found id: ""
	I1026 02:07:58.327575   62745 logs.go:282] 0 containers: []
	W1026 02:07:58.327582   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:58.327588   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:58.327649   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:58.364191   62745 cri.go:89] found id: ""
	I1026 02:07:58.364221   62745 logs.go:282] 0 containers: []
	W1026 02:07:58.364229   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:58.364235   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:58.364294   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:58.395374   62745 cri.go:89] found id: ""
	I1026 02:07:58.395399   62745 logs.go:282] 0 containers: []
	W1026 02:07:58.395407   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:58.395413   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:58.395470   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:58.428051   62745 cri.go:89] found id: ""
	I1026 02:07:58.428094   62745 logs.go:282] 0 containers: []
	W1026 02:07:58.428105   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:58.428115   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:58.428130   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:58.478234   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:58.478270   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:58.490968   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:58.490991   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:58.570380   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:58.570402   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:58.570414   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:58.648280   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:58.648313   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:55.280822   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:07:57.781150   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:00.629488   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:02.630607   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:01.184828   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:01.197285   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:01.197344   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:01.232327   62745 cri.go:89] found id: ""
	I1026 02:08:01.232352   62745 logs.go:282] 0 containers: []
	W1026 02:08:01.232360   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:01.232366   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:01.232413   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:01.264467   62745 cri.go:89] found id: ""
	I1026 02:08:01.264495   62745 logs.go:282] 0 containers: []
	W1026 02:08:01.264507   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:01.264514   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:01.264564   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:01.306169   62745 cri.go:89] found id: ""
	I1026 02:08:01.306195   62745 logs.go:282] 0 containers: []
	W1026 02:08:01.306205   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:01.306213   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:01.306279   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:01.339428   62745 cri.go:89] found id: ""
	I1026 02:08:01.339456   62745 logs.go:282] 0 containers: []
	W1026 02:08:01.339468   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:01.339476   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:01.339537   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:01.371483   62745 cri.go:89] found id: ""
	I1026 02:08:01.371514   62745 logs.go:282] 0 containers: []
	W1026 02:08:01.371525   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:01.371533   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:01.371594   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:01.403778   62745 cri.go:89] found id: ""
	I1026 02:08:01.403801   62745 logs.go:282] 0 containers: []
	W1026 02:08:01.403809   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:01.403815   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:01.403866   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:01.436030   62745 cri.go:89] found id: ""
	I1026 02:08:01.436054   62745 logs.go:282] 0 containers: []
	W1026 02:08:01.436064   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:01.436071   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:01.436133   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:01.469437   62745 cri.go:89] found id: ""
	I1026 02:08:01.469471   62745 logs.go:282] 0 containers: []
	W1026 02:08:01.469481   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:01.469492   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:01.469506   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:01.518183   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:01.518218   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:01.531223   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:01.531255   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:01.596036   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:01.596063   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:01.596080   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:01.672819   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:01.672856   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:59.781540   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:02.280933   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:04.281647   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:05.130116   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:07.629880   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:04.239826   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:04.254481   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:04.254545   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:04.295642   62745 cri.go:89] found id: ""
	I1026 02:08:04.295674   62745 logs.go:282] 0 containers: []
	W1026 02:08:04.295683   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:04.295689   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:04.295738   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:04.328260   62745 cri.go:89] found id: ""
	I1026 02:08:04.328281   62745 logs.go:282] 0 containers: []
	W1026 02:08:04.328289   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:04.328295   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:04.328342   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:04.364236   62745 cri.go:89] found id: ""
	I1026 02:08:04.364262   62745 logs.go:282] 0 containers: []
	W1026 02:08:04.364271   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:04.364278   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:04.364340   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:04.397430   62745 cri.go:89] found id: ""
	I1026 02:08:04.397457   62745 logs.go:282] 0 containers: []
	W1026 02:08:04.397466   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:04.397474   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:04.397533   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:04.433899   62745 cri.go:89] found id: ""
	I1026 02:08:04.433927   62745 logs.go:282] 0 containers: []
	W1026 02:08:04.433938   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:04.433945   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:04.434010   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:04.472230   62745 cri.go:89] found id: ""
	I1026 02:08:04.472263   62745 logs.go:282] 0 containers: []
	W1026 02:08:04.472274   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:04.472281   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:04.472341   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:04.509655   62745 cri.go:89] found id: ""
	I1026 02:08:04.509679   62745 logs.go:282] 0 containers: []
	W1026 02:08:04.509689   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:04.509695   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:04.509757   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:04.546581   62745 cri.go:89] found id: ""
	I1026 02:08:04.546610   62745 logs.go:282] 0 containers: []
	W1026 02:08:04.546622   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:04.546630   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:04.546641   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:04.620875   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:04.620898   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:04.620912   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:04.695375   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:04.695410   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:04.731475   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:04.731505   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:04.785649   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:04.785677   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
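	The block above is one full pass of minikube's control-plane health probe for this cluster (process 62745): it looks for a running kube-apiserver process, asks crictl for each expected control-plane container (all queries come back empty), and then collects kubelet, dmesg, "describe nodes", CRI-O, and container-status output. A rough manual equivalent, run on the node, is sketched below; the individual commands are taken verbatim from the log, only the grouping into one sequence is illustrative.

	    # Control-plane containers (empty output = nothing found, matching "0 containers" above)
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo crictl ps -a --quiet --name=etcd

	    # The same journals and kernel messages minikube gathers
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	    # Fails with "connection refused" while the apiserver on localhost:8443 is down
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig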
	I1026 02:08:07.300233   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:07.312696   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:07.312767   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:07.349242   62745 cri.go:89] found id: ""
	I1026 02:08:07.349274   62745 logs.go:282] 0 containers: []
	W1026 02:08:07.349285   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:07.349292   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:07.349357   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:07.382578   62745 cri.go:89] found id: ""
	I1026 02:08:07.382606   62745 logs.go:282] 0 containers: []
	W1026 02:08:07.382616   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:07.382623   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:07.382683   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:07.423434   62745 cri.go:89] found id: ""
	I1026 02:08:07.423465   62745 logs.go:282] 0 containers: []
	W1026 02:08:07.423477   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:07.423484   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:07.423542   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:07.464035   62745 cri.go:89] found id: ""
	I1026 02:08:07.464058   62745 logs.go:282] 0 containers: []
	W1026 02:08:07.464065   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:07.464070   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:07.464122   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:07.508768   62745 cri.go:89] found id: ""
	I1026 02:08:07.508794   62745 logs.go:282] 0 containers: []
	W1026 02:08:07.508802   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:07.508808   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:07.508854   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:07.542755   62745 cri.go:89] found id: ""
	I1026 02:08:07.542784   62745 logs.go:282] 0 containers: []
	W1026 02:08:07.542792   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:07.542798   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:07.542843   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:07.573819   62745 cri.go:89] found id: ""
	I1026 02:08:07.573850   62745 logs.go:282] 0 containers: []
	W1026 02:08:07.573860   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:07.573868   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:07.573926   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:07.610126   62745 cri.go:89] found id: ""
	I1026 02:08:07.610150   62745 logs.go:282] 0 containers: []
	W1026 02:08:07.610163   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:07.610170   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:07.610182   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:07.650919   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:07.650950   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:07.703138   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:07.703174   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:07.716055   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:07.716078   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:07.783214   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:07.783236   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:07.783250   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:06.780832   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:09.280520   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:10.129008   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:12.629717   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:10.357930   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:10.372839   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:10.372911   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:10.408799   62745 cri.go:89] found id: ""
	I1026 02:08:10.408823   62745 logs.go:282] 0 containers: []
	W1026 02:08:10.408832   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:10.408838   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:10.408896   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:10.444727   62745 cri.go:89] found id: ""
	I1026 02:08:10.444759   62745 logs.go:282] 0 containers: []
	W1026 02:08:10.444774   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:10.444781   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:10.444840   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:10.477628   62745 cri.go:89] found id: ""
	I1026 02:08:10.477659   62745 logs.go:282] 0 containers: []
	W1026 02:08:10.477668   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:10.477674   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:10.477732   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:10.518985   62745 cri.go:89] found id: ""
	I1026 02:08:10.519010   62745 logs.go:282] 0 containers: []
	W1026 02:08:10.519021   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:10.519028   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:10.519091   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:10.551984   62745 cri.go:89] found id: ""
	I1026 02:08:10.552011   62745 logs.go:282] 0 containers: []
	W1026 02:08:10.552019   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:10.552026   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:10.552086   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:10.583502   62745 cri.go:89] found id: ""
	I1026 02:08:10.583530   62745 logs.go:282] 0 containers: []
	W1026 02:08:10.583540   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:10.583548   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:10.583615   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:10.615570   62745 cri.go:89] found id: ""
	I1026 02:08:10.615600   62745 logs.go:282] 0 containers: []
	W1026 02:08:10.615611   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:10.615619   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:10.615680   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:10.660675   62745 cri.go:89] found id: ""
	I1026 02:08:10.660714   62745 logs.go:282] 0 containers: []
	W1026 02:08:10.660725   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:10.660737   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:10.660750   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:10.711969   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:10.712001   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:10.725496   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:10.725523   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:10.790976   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:10.791002   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:10.791016   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:10.871832   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:10.871865   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:13.409930   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:13.422624   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:13.422705   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:13.455147   62745 cri.go:89] found id: ""
	I1026 02:08:13.455174   62745 logs.go:282] 0 containers: []
	W1026 02:08:13.455185   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:13.455192   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:13.455261   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:13.486676   62745 cri.go:89] found id: ""
	I1026 02:08:13.486700   62745 logs.go:282] 0 containers: []
	W1026 02:08:13.486709   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:13.486715   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:13.486769   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:13.518163   62745 cri.go:89] found id: ""
	I1026 02:08:13.518190   62745 logs.go:282] 0 containers: []
	W1026 02:08:13.518198   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:13.518204   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:13.518259   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:13.550442   62745 cri.go:89] found id: ""
	I1026 02:08:13.550472   62745 logs.go:282] 0 containers: []
	W1026 02:08:13.550480   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:13.550486   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:13.550546   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:13.581575   62745 cri.go:89] found id: ""
	I1026 02:08:13.581604   62745 logs.go:282] 0 containers: []
	W1026 02:08:13.581626   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:13.581632   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:13.581689   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:13.617049   62745 cri.go:89] found id: ""
	I1026 02:08:13.617085   62745 logs.go:282] 0 containers: []
	W1026 02:08:13.617097   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:13.617105   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:13.617157   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:13.650969   62745 cri.go:89] found id: ""
	I1026 02:08:13.650994   62745 logs.go:282] 0 containers: []
	W1026 02:08:13.651004   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:13.651012   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:13.651073   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:13.688760   62745 cri.go:89] found id: ""
	I1026 02:08:13.688785   62745 logs.go:282] 0 containers: []
	W1026 02:08:13.688792   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:13.688800   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:13.688810   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:13.737744   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:13.737783   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:13.750768   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:13.750792   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:13.825287   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:13.825312   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:13.825325   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:13.903847   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:13.903889   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:11.280854   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:13.781402   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:14.629869   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:17.129444   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:16.440337   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:16.454191   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:16.454252   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:16.495504   62745 cri.go:89] found id: ""
	I1026 02:08:16.495537   62745 logs.go:282] 0 containers: []
	W1026 02:08:16.495549   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:16.495556   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:16.495616   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:16.529098   62745 cri.go:89] found id: ""
	I1026 02:08:16.529125   62745 logs.go:282] 0 containers: []
	W1026 02:08:16.529134   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:16.529140   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:16.529188   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:16.565347   62745 cri.go:89] found id: ""
	I1026 02:08:16.565376   62745 logs.go:282] 0 containers: []
	W1026 02:08:16.565384   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:16.565390   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:16.565462   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:16.602635   62745 cri.go:89] found id: ""
	I1026 02:08:16.602659   62745 logs.go:282] 0 containers: []
	W1026 02:08:16.602667   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:16.602674   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:16.602725   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:16.634610   62745 cri.go:89] found id: ""
	I1026 02:08:16.634636   62745 logs.go:282] 0 containers: []
	W1026 02:08:16.634646   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:16.634655   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:16.634723   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:16.665466   62745 cri.go:89] found id: ""
	I1026 02:08:16.665495   62745 logs.go:282] 0 containers: []
	W1026 02:08:16.665508   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:16.665516   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:16.665574   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:16.705100   62745 cri.go:89] found id: ""
	I1026 02:08:16.705130   62745 logs.go:282] 0 containers: []
	W1026 02:08:16.705142   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:16.705150   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:16.705209   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:16.738037   62745 cri.go:89] found id: ""
	I1026 02:08:16.738067   62745 logs.go:282] 0 containers: []
	W1026 02:08:16.738075   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:16.738083   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:16.738094   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:16.773953   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:16.773978   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:16.825028   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:16.825063   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:16.837494   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:16.837524   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:16.912281   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:16.912298   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:16.912311   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:16.280753   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:18.281498   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:19.629969   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:22.129063   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:19.493012   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:19.505677   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:19.505752   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:19.537587   62745 cri.go:89] found id: ""
	I1026 02:08:19.537609   62745 logs.go:282] 0 containers: []
	W1026 02:08:19.537618   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:19.537630   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:19.537702   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:19.569151   62745 cri.go:89] found id: ""
	I1026 02:08:19.569180   62745 logs.go:282] 0 containers: []
	W1026 02:08:19.569191   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:19.569199   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:19.569259   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:19.602798   62745 cri.go:89] found id: ""
	I1026 02:08:19.602829   62745 logs.go:282] 0 containers: []
	W1026 02:08:19.602840   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:19.602848   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:19.602906   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:19.635291   62745 cri.go:89] found id: ""
	I1026 02:08:19.635313   62745 logs.go:282] 0 containers: []
	W1026 02:08:19.635320   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:19.635326   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:19.635381   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:19.670775   62745 cri.go:89] found id: ""
	I1026 02:08:19.670801   62745 logs.go:282] 0 containers: []
	W1026 02:08:19.670808   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:19.670815   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:19.670863   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:19.707295   62745 cri.go:89] found id: ""
	I1026 02:08:19.707322   62745 logs.go:282] 0 containers: []
	W1026 02:08:19.707333   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:19.707341   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:19.707408   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:19.741160   62745 cri.go:89] found id: ""
	I1026 02:08:19.741181   62745 logs.go:282] 0 containers: []
	W1026 02:08:19.741189   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:19.741195   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:19.741255   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:19.772764   62745 cri.go:89] found id: ""
	I1026 02:08:19.772797   62745 logs.go:282] 0 containers: []
	W1026 02:08:19.772807   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:19.772816   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:19.772827   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:19.820416   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:19.820455   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:19.833864   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:19.833892   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:19.901887   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:19.901912   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:19.901926   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:19.975742   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:19.975777   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:22.513110   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:22.525810   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:22.525885   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:22.558634   62745 cri.go:89] found id: ""
	I1026 02:08:22.558665   62745 logs.go:282] 0 containers: []
	W1026 02:08:22.558676   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:22.558683   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:22.558740   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:22.590074   62745 cri.go:89] found id: ""
	I1026 02:08:22.590100   62745 logs.go:282] 0 containers: []
	W1026 02:08:22.590109   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:22.590115   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:22.590171   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:22.622736   62745 cri.go:89] found id: ""
	I1026 02:08:22.622759   62745 logs.go:282] 0 containers: []
	W1026 02:08:22.622766   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:22.622773   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:22.622826   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:22.660241   62745 cri.go:89] found id: ""
	I1026 02:08:22.660278   62745 logs.go:282] 0 containers: []
	W1026 02:08:22.660289   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:22.660297   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:22.660358   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:22.694328   62745 cri.go:89] found id: ""
	I1026 02:08:22.694352   62745 logs.go:282] 0 containers: []
	W1026 02:08:22.694362   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:22.694369   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:22.694435   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:22.725943   62745 cri.go:89] found id: ""
	I1026 02:08:22.725973   62745 logs.go:282] 0 containers: []
	W1026 02:08:22.725982   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:22.725990   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:22.726050   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:22.761196   62745 cri.go:89] found id: ""
	I1026 02:08:22.761221   62745 logs.go:282] 0 containers: []
	W1026 02:08:22.761230   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:22.761237   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:22.761300   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:22.794536   62745 cri.go:89] found id: ""
	I1026 02:08:22.794557   62745 logs.go:282] 0 containers: []
	W1026 02:08:22.794564   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:22.794571   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:22.794583   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:22.806661   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:22.806685   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:22.871740   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:22.871760   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:22.871774   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:22.946659   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:22.946694   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:22.986919   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:22.986944   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:20.780113   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:22.780840   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:24.129965   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:26.629660   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:25.532589   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:25.544793   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:25.544862   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:25.578566   62745 cri.go:89] found id: ""
	I1026 02:08:25.578596   62745 logs.go:282] 0 containers: []
	W1026 02:08:25.578605   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:25.578611   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:25.578668   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:25.611999   62745 cri.go:89] found id: ""
	I1026 02:08:25.612023   62745 logs.go:282] 0 containers: []
	W1026 02:08:25.612031   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:25.612037   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:25.612095   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:25.644308   62745 cri.go:89] found id: ""
	I1026 02:08:25.644330   62745 logs.go:282] 0 containers: []
	W1026 02:08:25.644338   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:25.644344   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:25.644408   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:25.676010   62745 cri.go:89] found id: ""
	I1026 02:08:25.676036   62745 logs.go:282] 0 containers: []
	W1026 02:08:25.676044   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:25.676051   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:25.676109   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:25.711676   62745 cri.go:89] found id: ""
	I1026 02:08:25.711704   62745 logs.go:282] 0 containers: []
	W1026 02:08:25.711712   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:25.711719   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:25.711771   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:25.747402   62745 cri.go:89] found id: ""
	I1026 02:08:25.747429   62745 logs.go:282] 0 containers: []
	W1026 02:08:25.747440   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:25.747448   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:25.747497   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:25.783460   62745 cri.go:89] found id: ""
	I1026 02:08:25.783483   62745 logs.go:282] 0 containers: []
	W1026 02:08:25.783492   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:25.783499   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:25.783556   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:25.815189   62745 cri.go:89] found id: ""
	I1026 02:08:25.815218   62745 logs.go:282] 0 containers: []
	W1026 02:08:25.815232   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:25.815242   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:25.815256   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:25.890691   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:25.890731   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:25.930586   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:25.930621   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:25.980506   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:25.980540   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:25.993501   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:25.993532   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:26.054846   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:28.556014   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:28.568620   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:28.568680   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:28.603011   62745 cri.go:89] found id: ""
	I1026 02:08:28.603041   62745 logs.go:282] 0 containers: []
	W1026 02:08:28.603052   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:28.603062   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:28.603125   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:28.638080   62745 cri.go:89] found id: ""
	I1026 02:08:28.638114   62745 logs.go:282] 0 containers: []
	W1026 02:08:28.638124   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:28.638133   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:28.638195   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:28.673207   62745 cri.go:89] found id: ""
	I1026 02:08:28.673234   62745 logs.go:282] 0 containers: []
	W1026 02:08:28.673245   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:28.673251   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:28.673306   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:28.709564   62745 cri.go:89] found id: ""
	I1026 02:08:28.709587   62745 logs.go:282] 0 containers: []
	W1026 02:08:28.709596   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:28.709602   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:28.709660   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:28.745873   62745 cri.go:89] found id: ""
	I1026 02:08:28.745899   62745 logs.go:282] 0 containers: []
	W1026 02:08:28.745907   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:28.745913   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:28.745978   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:28.779839   62745 cri.go:89] found id: ""
	I1026 02:08:28.779865   62745 logs.go:282] 0 containers: []
	W1026 02:08:28.779876   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:28.779892   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:28.779948   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:28.813925   62745 cri.go:89] found id: ""
	I1026 02:08:28.813949   62745 logs.go:282] 0 containers: []
	W1026 02:08:28.813957   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:28.813964   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:28.814010   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:28.847919   62745 cri.go:89] found id: ""
	I1026 02:08:28.847944   62745 logs.go:282] 0 containers: []
	W1026 02:08:28.847951   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:28.847961   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:28.847973   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:28.916176   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:28.916197   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:28.916209   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:28.996542   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:28.996577   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:29.037045   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:29.037070   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:29.087027   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:29.087059   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:25.280780   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:27.780069   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:28.630570   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:31.128505   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:33.128541   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
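	The interleaved pod_ready lines (processes 62203 and 62379) come from parallel test runs polling their metrics-server pods, whose Ready condition stays "False" throughout this window. A comparable one-off check is sketched below; the pod name and namespace are the ones in the log, while the kubectl jsonpath query is only an illustrative assumption, not the call minikube itself makes.

	    # Prints "True" once the pod's Ready condition is satisfied
	    kubectl get pod metrics-server-6867b74b74-kwrk2 -n kube-system \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'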
	I1026 02:08:31.603457   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:31.615817   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:31.615876   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:31.651806   62745 cri.go:89] found id: ""
	I1026 02:08:31.651830   62745 logs.go:282] 0 containers: []
	W1026 02:08:31.651840   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:31.651848   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:31.651908   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:31.684606   62745 cri.go:89] found id: ""
	I1026 02:08:31.684635   62745 logs.go:282] 0 containers: []
	W1026 02:08:31.684645   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:31.684653   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:31.684712   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:31.717923   62745 cri.go:89] found id: ""
	I1026 02:08:31.717954   62745 logs.go:282] 0 containers: []
	W1026 02:08:31.717966   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:31.717976   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:31.718041   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:31.751740   62745 cri.go:89] found id: ""
	I1026 02:08:31.751770   62745 logs.go:282] 0 containers: []
	W1026 02:08:31.751781   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:31.751789   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:31.751848   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:31.784175   62745 cri.go:89] found id: ""
	I1026 02:08:31.784244   62745 logs.go:282] 0 containers: []
	W1026 02:08:31.784261   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:31.784271   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:31.784330   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:31.817523   62745 cri.go:89] found id: ""
	I1026 02:08:31.817552   62745 logs.go:282] 0 containers: []
	W1026 02:08:31.817563   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:31.817572   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:31.817634   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:31.849001   62745 cri.go:89] found id: ""
	I1026 02:08:31.849034   62745 logs.go:282] 0 containers: []
	W1026 02:08:31.849047   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:31.849055   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:31.849105   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:31.879403   62745 cri.go:89] found id: ""
	I1026 02:08:31.879431   62745 logs.go:282] 0 containers: []
	W1026 02:08:31.879456   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:31.879464   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:31.879487   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:31.942447   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:31.942474   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:31.942488   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:32.021986   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:32.022022   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:32.056609   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:32.056636   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:32.105273   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:32.105304   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:29.781807   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:32.283383   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:35.129893   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:37.629720   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:34.618372   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:34.630895   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:34.630972   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:34.665359   62745 cri.go:89] found id: ""
	I1026 02:08:34.665390   62745 logs.go:282] 0 containers: []
	W1026 02:08:34.665402   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:34.665410   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:34.665486   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:34.696082   62745 cri.go:89] found id: ""
	I1026 02:08:34.696109   62745 logs.go:282] 0 containers: []
	W1026 02:08:34.696118   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:34.696126   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:34.696190   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:34.728736   62745 cri.go:89] found id: ""
	I1026 02:08:34.728763   62745 logs.go:282] 0 containers: []
	W1026 02:08:34.728772   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:34.728778   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:34.728834   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:34.760581   62745 cri.go:89] found id: ""
	I1026 02:08:34.760614   62745 logs.go:282] 0 containers: []
	W1026 02:08:34.760625   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:34.760633   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:34.760690   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:34.792050   62745 cri.go:89] found id: ""
	I1026 02:08:34.792071   62745 logs.go:282] 0 containers: []
	W1026 02:08:34.792079   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:34.792085   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:34.792141   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:34.823661   62745 cri.go:89] found id: ""
	I1026 02:08:34.823689   62745 logs.go:282] 0 containers: []
	W1026 02:08:34.823704   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:34.823710   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:34.823758   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:34.858707   62745 cri.go:89] found id: ""
	I1026 02:08:34.858732   62745 logs.go:282] 0 containers: []
	W1026 02:08:34.858743   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:34.858751   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:34.858809   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:34.889620   62745 cri.go:89] found id: ""
	I1026 02:08:34.889648   62745 logs.go:282] 0 containers: []
	W1026 02:08:34.889660   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:34.889670   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:34.889683   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:34.938323   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:34.938355   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:34.950839   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:34.950864   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:35.022103   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:35.022131   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:35.022146   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:35.105889   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:35.105933   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
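	(The block above is one complete diagnostics pass from the node still running the v1.20.0 binaries, logged by process 62745: minikube probes each expected control-plane container by name via crictl, finds none, and falls back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The describe-nodes step fails because kubectl cannot reach an API server on localhost:8443. A minimal sketch of the same probe, run manually on the node and using only commands already shown in this log, would be:

	# empty output means no kube-apiserver container has been created yet
	sudo crictl ps -a --quiet --name=kube-apiserver
	# fails with "connection refused" while the apiserver is down
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	The same pass repeats every few seconds for the remainder of this excerpt.)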
	I1026 02:08:37.647963   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:37.660729   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:37.660801   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:37.694126   62745 cri.go:89] found id: ""
	I1026 02:08:37.694154   62745 logs.go:282] 0 containers: []
	W1026 02:08:37.694165   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:37.694173   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:37.694226   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:37.725639   62745 cri.go:89] found id: ""
	I1026 02:08:37.725671   62745 logs.go:282] 0 containers: []
	W1026 02:08:37.725681   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:37.725693   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:37.725742   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:37.757094   62745 cri.go:89] found id: ""
	I1026 02:08:37.757121   62745 logs.go:282] 0 containers: []
	W1026 02:08:37.757132   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:37.757140   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:37.757199   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:37.790413   62745 cri.go:89] found id: ""
	I1026 02:08:37.790440   62745 logs.go:282] 0 containers: []
	W1026 02:08:37.790447   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:37.790453   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:37.790500   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:37.824258   62745 cri.go:89] found id: ""
	I1026 02:08:37.824284   62745 logs.go:282] 0 containers: []
	W1026 02:08:37.824292   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:37.824298   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:37.824345   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:37.854922   62745 cri.go:89] found id: ""
	I1026 02:08:37.854957   62745 logs.go:282] 0 containers: []
	W1026 02:08:37.854969   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:37.854978   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:37.855043   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:37.891129   62745 cri.go:89] found id: ""
	I1026 02:08:37.891157   62745 logs.go:282] 0 containers: []
	W1026 02:08:37.891168   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:37.891175   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:37.891236   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:37.925548   62745 cri.go:89] found id: ""
	I1026 02:08:37.925582   62745 logs.go:282] 0 containers: []
	W1026 02:08:37.925594   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:37.925605   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:37.925618   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:38.003275   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:38.003308   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:38.044114   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:38.044147   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:38.098885   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:38.098916   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:38.111804   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:38.111829   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:38.175922   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:34.780351   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:36.780711   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:39.279974   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:40.128903   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:42.132558   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:40.676707   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:40.689205   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:40.689269   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:40.721318   62745 cri.go:89] found id: ""
	I1026 02:08:40.721346   62745 logs.go:282] 0 containers: []
	W1026 02:08:40.721354   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:40.721360   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:40.721438   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:40.753839   62745 cri.go:89] found id: ""
	I1026 02:08:40.753872   62745 logs.go:282] 0 containers: []
	W1026 02:08:40.753883   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:40.753891   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:40.753953   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:40.787788   62745 cri.go:89] found id: ""
	I1026 02:08:40.787815   62745 logs.go:282] 0 containers: []
	W1026 02:08:40.787827   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:40.787835   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:40.787892   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:40.822322   62745 cri.go:89] found id: ""
	I1026 02:08:40.822353   62745 logs.go:282] 0 containers: []
	W1026 02:08:40.822365   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:40.822373   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:40.822437   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:40.855255   62745 cri.go:89] found id: ""
	I1026 02:08:40.855281   62745 logs.go:282] 0 containers: []
	W1026 02:08:40.855291   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:40.855299   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:40.855358   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:40.888181   62745 cri.go:89] found id: ""
	I1026 02:08:40.888206   62745 logs.go:282] 0 containers: []
	W1026 02:08:40.888215   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:40.888220   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:40.888271   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:40.924334   62745 cri.go:89] found id: ""
	I1026 02:08:40.924361   62745 logs.go:282] 0 containers: []
	W1026 02:08:40.924370   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:40.924376   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:40.924426   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:40.961191   62745 cri.go:89] found id: ""
	I1026 02:08:40.961216   62745 logs.go:282] 0 containers: []
	W1026 02:08:40.961224   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:40.961231   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:40.961261   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:40.973567   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:40.973590   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:41.039495   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:41.039515   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:41.039527   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:41.116293   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:41.116330   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:41.153112   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:41.153138   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:43.702627   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:43.715096   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:43.715160   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:43.751422   62745 cri.go:89] found id: ""
	I1026 02:08:43.751452   62745 logs.go:282] 0 containers: []
	W1026 02:08:43.751460   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:43.751468   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:43.751531   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:43.785497   62745 cri.go:89] found id: ""
	I1026 02:08:43.785522   62745 logs.go:282] 0 containers: []
	W1026 02:08:43.785529   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:43.785534   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:43.785578   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:43.817202   62745 cri.go:89] found id: ""
	I1026 02:08:43.817226   62745 logs.go:282] 0 containers: []
	W1026 02:08:43.817233   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:43.817240   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:43.817299   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:43.849679   62745 cri.go:89] found id: ""
	I1026 02:08:43.849700   62745 logs.go:282] 0 containers: []
	W1026 02:08:43.849707   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:43.849713   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:43.849771   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:43.881980   62745 cri.go:89] found id: ""
	I1026 02:08:43.882006   62745 logs.go:282] 0 containers: []
	W1026 02:08:43.882017   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:43.882024   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:43.882085   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:43.912117   62745 cri.go:89] found id: ""
	I1026 02:08:43.912143   62745 logs.go:282] 0 containers: []
	W1026 02:08:43.912155   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:43.912162   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:43.912224   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:43.946380   62745 cri.go:89] found id: ""
	I1026 02:08:43.946407   62745 logs.go:282] 0 containers: []
	W1026 02:08:43.946414   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:43.946420   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:43.946470   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:43.982498   62745 cri.go:89] found id: ""
	I1026 02:08:43.982533   62745 logs.go:282] 0 containers: []
	W1026 02:08:43.982544   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:43.982555   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:43.982568   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:44.059851   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:44.059889   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:44.097961   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:44.097994   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:44.150021   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:44.150064   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:44.163400   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:44.163421   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 02:08:41.280582   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:43.281301   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:44.629048   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:46.629705   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	W1026 02:08:44.229895   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:46.730182   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:46.743267   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:46.743346   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:46.777313   62745 cri.go:89] found id: ""
	I1026 02:08:46.777346   62745 logs.go:282] 0 containers: []
	W1026 02:08:46.777358   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:46.777365   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:46.777444   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:46.810378   62745 cri.go:89] found id: ""
	I1026 02:08:46.810416   62745 logs.go:282] 0 containers: []
	W1026 02:08:46.810428   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:46.810436   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:46.810502   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:46.842669   62745 cri.go:89] found id: ""
	I1026 02:08:46.842700   62745 logs.go:282] 0 containers: []
	W1026 02:08:46.842710   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:46.842718   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:46.842779   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:46.875247   62745 cri.go:89] found id: ""
	I1026 02:08:46.875274   62745 logs.go:282] 0 containers: []
	W1026 02:08:46.875285   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:46.875292   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:46.875355   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:46.905475   62745 cri.go:89] found id: ""
	I1026 02:08:46.905501   62745 logs.go:282] 0 containers: []
	W1026 02:08:46.905509   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:46.905514   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:46.905563   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:46.936029   62745 cri.go:89] found id: ""
	I1026 02:08:46.936050   62745 logs.go:282] 0 containers: []
	W1026 02:08:46.936057   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:46.936064   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:46.936108   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:46.968276   62745 cri.go:89] found id: ""
	I1026 02:08:46.968308   62745 logs.go:282] 0 containers: []
	W1026 02:08:46.968319   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:46.968326   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:46.968388   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:47.014097   62745 cri.go:89] found id: ""
	I1026 02:08:47.014124   62745 logs.go:282] 0 containers: []
	W1026 02:08:47.014132   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:47.014140   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:47.014152   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:47.052220   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:47.052244   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:47.107413   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:47.107458   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:47.119973   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:47.120001   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:47.190031   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:47.190049   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:47.190060   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:45.780498   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:47.780620   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:49.129574   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:51.129759   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:53.130061   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:49.764726   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:49.777467   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:49.777541   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:49.808972   62745 cri.go:89] found id: ""
	I1026 02:08:49.809002   62745 logs.go:282] 0 containers: []
	W1026 02:08:49.809013   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:49.809021   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:49.809084   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:49.841093   62745 cri.go:89] found id: ""
	I1026 02:08:49.841122   62745 logs.go:282] 0 containers: []
	W1026 02:08:49.841130   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:49.841136   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:49.841193   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:49.875478   62745 cri.go:89] found id: ""
	I1026 02:08:49.875509   62745 logs.go:282] 0 containers: []
	W1026 02:08:49.875521   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:49.875529   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:49.875595   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:49.908860   62745 cri.go:89] found id: ""
	I1026 02:08:49.908891   62745 logs.go:282] 0 containers: []
	W1026 02:08:49.908901   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:49.908907   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:49.908972   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:49.941113   62745 cri.go:89] found id: ""
	I1026 02:08:49.941137   62745 logs.go:282] 0 containers: []
	W1026 02:08:49.941144   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:49.941150   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:49.941198   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:49.973200   62745 cri.go:89] found id: ""
	I1026 02:08:49.973228   62745 logs.go:282] 0 containers: []
	W1026 02:08:49.973239   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:49.973247   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:49.973307   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:50.006174   62745 cri.go:89] found id: ""
	I1026 02:08:50.006203   62745 logs.go:282] 0 containers: []
	W1026 02:08:50.006213   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:50.006221   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:50.006291   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:50.039623   62745 cri.go:89] found id: ""
	I1026 02:08:50.039652   62745 logs.go:282] 0 containers: []
	W1026 02:08:50.039675   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:50.039686   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:50.039701   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:50.091561   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:50.091600   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:50.105026   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:50.105054   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:50.174188   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:50.174211   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:50.174226   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:50.256489   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:50.256525   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:52.795154   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:52.807276   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:52.807342   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:52.842173   62745 cri.go:89] found id: ""
	I1026 02:08:52.842199   62745 logs.go:282] 0 containers: []
	W1026 02:08:52.842210   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:52.842218   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:52.842270   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:52.875913   62745 cri.go:89] found id: ""
	I1026 02:08:52.875942   62745 logs.go:282] 0 containers: []
	W1026 02:08:52.875953   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:52.875960   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:52.876020   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:52.906944   62745 cri.go:89] found id: ""
	I1026 02:08:52.906972   62745 logs.go:282] 0 containers: []
	W1026 02:08:52.906980   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:52.906988   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:52.907046   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:52.939621   62745 cri.go:89] found id: ""
	I1026 02:08:52.939653   62745 logs.go:282] 0 containers: []
	W1026 02:08:52.939664   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:52.939671   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:52.939786   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:52.970960   62745 cri.go:89] found id: ""
	I1026 02:08:52.970992   62745 logs.go:282] 0 containers: []
	W1026 02:08:52.971003   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:52.971011   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:52.971079   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:53.003974   62745 cri.go:89] found id: ""
	I1026 02:08:53.004005   62745 logs.go:282] 0 containers: []
	W1026 02:08:53.004016   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:53.004024   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:53.004083   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:53.036906   62745 cri.go:89] found id: ""
	I1026 02:08:53.036930   62745 logs.go:282] 0 containers: []
	W1026 02:08:53.036938   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:53.036944   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:53.036998   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:53.066878   62745 cri.go:89] found id: ""
	I1026 02:08:53.066904   62745 logs.go:282] 0 containers: []
	W1026 02:08:53.066924   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:53.066934   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:53.066948   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:53.079228   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:53.079250   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:53.143347   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:53.143378   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:53.143391   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:53.218363   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:53.218399   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:53.254757   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:53.254793   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:49.781985   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:52.280363   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:54.282039   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:55.629006   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:57.630278   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:55.806558   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:55.819075   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:55.819143   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:55.851175   62745 cri.go:89] found id: ""
	I1026 02:08:55.851197   62745 logs.go:282] 0 containers: []
	W1026 02:08:55.851205   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:55.851211   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:55.851270   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:55.882873   62745 cri.go:89] found id: ""
	I1026 02:08:55.882900   62745 logs.go:282] 0 containers: []
	W1026 02:08:55.882909   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:55.882918   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:55.882979   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:55.915889   62745 cri.go:89] found id: ""
	I1026 02:08:55.915911   62745 logs.go:282] 0 containers: []
	W1026 02:08:55.915922   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:55.915927   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:55.915983   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:55.948031   62745 cri.go:89] found id: ""
	I1026 02:08:55.948060   62745 logs.go:282] 0 containers: []
	W1026 02:08:55.948072   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:55.948079   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:55.948136   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:55.979736   62745 cri.go:89] found id: ""
	I1026 02:08:55.979762   62745 logs.go:282] 0 containers: []
	W1026 02:08:55.979771   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:55.979781   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:55.979829   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:56.011942   62745 cri.go:89] found id: ""
	I1026 02:08:56.011975   62745 logs.go:282] 0 containers: []
	W1026 02:08:56.011983   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:56.011990   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:56.012042   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:56.047602   62745 cri.go:89] found id: ""
	I1026 02:08:56.047630   62745 logs.go:282] 0 containers: []
	W1026 02:08:56.047638   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:56.047645   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:56.047732   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:56.078132   62745 cri.go:89] found id: ""
	I1026 02:08:56.078162   62745 logs.go:282] 0 containers: []
	W1026 02:08:56.078172   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:56.078183   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:56.078202   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:56.090232   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:56.090259   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:56.152734   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:56.152757   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:56.152770   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:56.234437   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:56.234471   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:56.273058   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:56.273088   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:58.827935   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:58.840067   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:58.840133   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:58.872130   62745 cri.go:89] found id: ""
	I1026 02:08:58.872155   62745 logs.go:282] 0 containers: []
	W1026 02:08:58.872163   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:58.872169   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:58.872219   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:58.904718   62745 cri.go:89] found id: ""
	I1026 02:08:58.904744   62745 logs.go:282] 0 containers: []
	W1026 02:08:58.904752   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:58.904757   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:58.904804   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:58.936774   62745 cri.go:89] found id: ""
	I1026 02:08:58.936797   62745 logs.go:282] 0 containers: []
	W1026 02:08:58.936806   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:58.936814   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:58.936872   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:58.972820   62745 cri.go:89] found id: ""
	I1026 02:08:58.972841   62745 logs.go:282] 0 containers: []
	W1026 02:08:58.972848   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:58.972855   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:58.972912   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:59.006748   62745 cri.go:89] found id: ""
	I1026 02:08:59.006780   62745 logs.go:282] 0 containers: []
	W1026 02:08:59.006791   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:59.006799   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:59.006851   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:59.037699   62745 cri.go:89] found id: ""
	I1026 02:08:59.037726   62745 logs.go:282] 0 containers: []
	W1026 02:08:59.037735   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:59.037742   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:59.037807   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:59.068083   62745 cri.go:89] found id: ""
	I1026 02:08:59.068105   62745 logs.go:282] 0 containers: []
	W1026 02:08:59.068112   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:59.068118   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:59.068164   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:59.098128   62745 cri.go:89] found id: ""
	I1026 02:08:59.098158   62745 logs.go:282] 0 containers: []
	W1026 02:08:59.098168   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:59.098179   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:59.098195   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:59.149525   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:59.149556   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:59.170062   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:59.170092   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 02:08:56.781012   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:08:58.781277   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:00.129292   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:02.130411   62379 pod_ready.go:103] pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:02.629396   62379 pod_ready.go:82] duration metric: took 4m0.006244587s for pod "metrics-server-6867b74b74-c9cwx" in "kube-system" namespace to be "Ready" ...
	E1026 02:09:02.629441   62379 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1026 02:09:02.629454   62379 pod_ready.go:39] duration metric: took 4m5.551545507s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:09:02.629473   62379 api_server.go:52] waiting for apiserver process to appear ...
	I1026 02:09:02.629506   62379 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:09:02.629564   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:09:02.672994   62379 cri.go:89] found id: "04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546"
	I1026 02:09:02.673015   62379 cri.go:89] found id: ""
	I1026 02:09:02.673025   62379 logs.go:282] 1 containers: [04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546]
	I1026 02:09:02.673082   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:02.677308   62379 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:09:02.677364   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:09:02.713066   62379 cri.go:89] found id: "3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d"
	I1026 02:09:02.713093   62379 cri.go:89] found id: ""
	I1026 02:09:02.713103   62379 logs.go:282] 1 containers: [3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d]
	I1026 02:09:02.713160   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:02.717003   62379 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:09:02.717077   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:09:02.752908   62379 cri.go:89] found id: "ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237"
	I1026 02:09:02.752943   62379 cri.go:89] found id: ""
	I1026 02:09:02.752952   62379 logs.go:282] 1 containers: [ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237]
	I1026 02:09:02.753007   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:02.757456   62379 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:09:02.757520   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:09:02.793298   62379 cri.go:89] found id: "4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c"
	I1026 02:09:02.793325   62379 cri.go:89] found id: ""
	I1026 02:09:02.793334   62379 logs.go:282] 1 containers: [4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c]
	I1026 02:09:02.793398   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:02.797235   62379 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:09:02.797303   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:09:02.830394   62379 cri.go:89] found id: "8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b"
	I1026 02:09:02.830426   62379 cri.go:89] found id: ""
	I1026 02:09:02.830436   62379 logs.go:282] 1 containers: [8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b]
	I1026 02:09:02.830495   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:02.834371   62379 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:09:02.834432   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:09:02.870510   62379 cri.go:89] found id: "63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa"
	I1026 02:09:02.870541   62379 cri.go:89] found id: ""
	I1026 02:09:02.870551   62379 logs.go:282] 1 containers: [63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa]
	I1026 02:09:02.870608   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:02.874599   62379 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:09:02.874689   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:09:02.909554   62379 cri.go:89] found id: ""
	I1026 02:09:02.909584   62379 logs.go:282] 0 containers: []
	W1026 02:09:02.909595   62379 logs.go:284] No container was found matching "kindnet"
	I1026 02:09:02.909603   62379 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:09:02.909667   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:09:02.944929   62379 cri.go:89] found id: "971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37"
	I1026 02:09:02.944955   62379 cri.go:89] found id: "ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72"
	I1026 02:09:02.944960   62379 cri.go:89] found id: ""
	I1026 02:09:02.944971   62379 logs.go:282] 2 containers: [971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37 ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72]
	I1026 02:09:02.945020   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:02.949034   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:02.952534   62379 logs.go:123] Gathering logs for kube-apiserver [04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546] ...
	I1026 02:09:02.952559   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546"
	I1026 02:09:03.000272   62379 logs.go:123] Gathering logs for kube-controller-manager [63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa] ...
	I1026 02:09:03.000304   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa"
	I1026 02:09:03.051886   62379 logs.go:123] Gathering logs for storage-provisioner [971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37] ...
	I1026 02:09:03.051918   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37"
	I1026 02:09:03.088935   62379 logs.go:123] Gathering logs for storage-provisioner [ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72] ...
	I1026 02:09:03.088971   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72"
	I1026 02:09:03.123010   62379 logs.go:123] Gathering logs for kube-proxy [8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b] ...
	I1026 02:09:03.123034   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b"
	I1026 02:09:03.158130   62379 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:09:03.158158   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
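	(By contrast, the profile logged by process 62379, which uses the v1.31.2 binaries, does find its control-plane containers: crictl returns IDs for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager and two storage-provisioner instances, and minikube then tails each container's log. The per-container step it runs, as shown above, has the form below, where the container ID is taken from the preceding crictl ps output:

	# tail the last 400 lines of one container's log
	sudo /usr/bin/crictl logs --tail 400 <container-id>
	)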
	W1026 02:08:59.274024   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:59.274047   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:59.274063   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:59.347546   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:59.347579   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:09:01.882822   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:09:01.896765   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:09:01.896832   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:09:01.934973   62745 cri.go:89] found id: ""
	I1026 02:09:01.935002   62745 logs.go:282] 0 containers: []
	W1026 02:09:01.935010   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:09:01.935016   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:09:01.935069   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:09:01.972272   62745 cri.go:89] found id: ""
	I1026 02:09:01.972299   62745 logs.go:282] 0 containers: []
	W1026 02:09:01.972307   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:09:01.972312   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:09:01.972364   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:09:02.007986   62745 cri.go:89] found id: ""
	I1026 02:09:02.008015   62745 logs.go:282] 0 containers: []
	W1026 02:09:02.008026   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:09:02.008035   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:09:02.008100   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:09:02.041798   62745 cri.go:89] found id: ""
	I1026 02:09:02.041827   62745 logs.go:282] 0 containers: []
	W1026 02:09:02.041837   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:09:02.041845   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:09:02.041912   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:09:02.077088   62745 cri.go:89] found id: ""
	I1026 02:09:02.077116   62745 logs.go:282] 0 containers: []
	W1026 02:09:02.077123   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:09:02.077129   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:09:02.077180   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:09:02.114603   62745 cri.go:89] found id: ""
	I1026 02:09:02.114630   62745 logs.go:282] 0 containers: []
	W1026 02:09:02.114638   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:09:02.114645   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:09:02.114705   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:09:02.149124   62745 cri.go:89] found id: ""
	I1026 02:09:02.149153   62745 logs.go:282] 0 containers: []
	W1026 02:09:02.149165   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:09:02.149172   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:09:02.149236   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:09:02.183885   62745 cri.go:89] found id: ""
	I1026 02:09:02.183916   62745 logs.go:282] 0 containers: []
	W1026 02:09:02.183927   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:09:02.183937   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:09:02.183950   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:09:02.266206   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:09:02.266245   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:09:02.305679   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:09:02.305711   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:09:02.355932   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:09:02.355972   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:09:02.369288   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:09:02.369316   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:09:02.433916   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:09:01.280471   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:03.280544   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:03.629453   62379 logs.go:123] Gathering logs for kubelet ...
	I1026 02:09:03.629495   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:09:03.696362   62379 logs.go:123] Gathering logs for dmesg ...
	I1026 02:09:03.696396   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:09:03.712852   62379 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:09:03.712876   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 02:09:03.827614   62379 logs.go:123] Gathering logs for etcd [3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d] ...
	I1026 02:09:03.827640   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d"
	I1026 02:09:03.876130   62379 logs.go:123] Gathering logs for coredns [ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237] ...
	I1026 02:09:03.876160   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237"
	I1026 02:09:03.909399   62379 logs.go:123] Gathering logs for kube-scheduler [4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c] ...
	I1026 02:09:03.909441   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c"
	I1026 02:09:03.942214   62379 logs.go:123] Gathering logs for container status ...
	I1026 02:09:03.942240   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:09:06.479789   62379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:09:06.496699   62379 api_server.go:72] duration metric: took 4m16.170861339s to wait for apiserver process to appear ...
	I1026 02:09:06.496733   62379 api_server.go:88] waiting for apiserver healthz status ...
	I1026 02:09:06.496775   62379 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:09:06.496837   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:09:06.533373   62379 cri.go:89] found id: "04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546"
	I1026 02:09:06.533400   62379 cri.go:89] found id: ""
	I1026 02:09:06.533411   62379 logs.go:282] 1 containers: [04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546]
	I1026 02:09:06.533482   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:06.537278   62379 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:09:06.537337   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:09:06.571159   62379 cri.go:89] found id: "3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d"
	I1026 02:09:06.571179   62379 cri.go:89] found id: ""
	I1026 02:09:06.571188   62379 logs.go:282] 1 containers: [3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d]
	I1026 02:09:06.571241   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:06.575467   62379 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:09:06.575525   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:09:06.609535   62379 cri.go:89] found id: "ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237"
	I1026 02:09:06.609553   62379 cri.go:89] found id: ""
	I1026 02:09:06.609560   62379 logs.go:282] 1 containers: [ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237]
	I1026 02:09:06.609605   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:06.613338   62379 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:09:06.613387   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:09:06.650535   62379 cri.go:89] found id: "4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c"
	I1026 02:09:06.650554   62379 cri.go:89] found id: ""
	I1026 02:09:06.650560   62379 logs.go:282] 1 containers: [4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c]
	I1026 02:09:06.650609   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:06.654502   62379 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:09:06.654568   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:09:06.692996   62379 cri.go:89] found id: "8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b"
	I1026 02:09:06.693026   62379 cri.go:89] found id: ""
	I1026 02:09:06.693036   62379 logs.go:282] 1 containers: [8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b]
	I1026 02:09:06.693092   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:06.696994   62379 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:09:06.697056   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:09:06.735652   62379 cri.go:89] found id: "63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa"
	I1026 02:09:06.735675   62379 cri.go:89] found id: ""
	I1026 02:09:06.735684   62379 logs.go:282] 1 containers: [63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa]
	I1026 02:09:06.735744   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:06.739498   62379 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:09:06.739558   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:09:06.784332   62379 cri.go:89] found id: ""
	I1026 02:09:06.784356   62379 logs.go:282] 0 containers: []
	W1026 02:09:06.784366   62379 logs.go:284] No container was found matching "kindnet"
	I1026 02:09:06.784373   62379 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:09:06.784431   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:09:06.822577   62379 cri.go:89] found id: "971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37"
	I1026 02:09:06.822596   62379 cri.go:89] found id: "ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72"
	I1026 02:09:06.822600   62379 cri.go:89] found id: ""
	I1026 02:09:06.822606   62379 logs.go:282] 2 containers: [971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37 ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72]
	I1026 02:09:06.822650   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:06.826670   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:06.830256   62379 logs.go:123] Gathering logs for container status ...
	I1026 02:09:06.830277   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:09:06.869077   62379 logs.go:123] Gathering logs for kubelet ...
	I1026 02:09:06.869107   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:09:06.934843   62379 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:09:06.934878   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 02:09:07.037743   62379 logs.go:123] Gathering logs for kube-proxy [8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b] ...
	I1026 02:09:07.037770   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b"
	I1026 02:09:07.072942   62379 logs.go:123] Gathering logs for storage-provisioner [971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37] ...
	I1026 02:09:07.072988   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37"
	I1026 02:09:07.107693   62379 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:09:07.107721   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:09:07.546502   62379 logs.go:123] Gathering logs for kube-controller-manager [63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa] ...
	I1026 02:09:07.546549   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa"
	I1026 02:09:07.596627   62379 logs.go:123] Gathering logs for storage-provisioner [ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72] ...
	I1026 02:09:07.596662   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72"
	I1026 02:09:07.628840   62379 logs.go:123] Gathering logs for dmesg ...
	I1026 02:09:07.628889   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:09:07.642610   62379 logs.go:123] Gathering logs for kube-apiserver [04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546] ...
	I1026 02:09:07.642638   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546"
	I1026 02:09:07.688949   62379 logs.go:123] Gathering logs for etcd [3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d] ...
	I1026 02:09:07.688994   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d"
	I1026 02:09:07.730741   62379 logs.go:123] Gathering logs for coredns [ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237] ...
	I1026 02:09:07.730772   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237"
	I1026 02:09:07.764778   62379 logs.go:123] Gathering logs for kube-scheduler [4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c] ...
	I1026 02:09:07.764809   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c"
	I1026 02:09:04.935049   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:09:04.953402   62745 kubeadm.go:597] duration metric: took 4m3.741693828s to restartPrimaryControlPlane
	W1026 02:09:04.953503   62745 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1026 02:09:04.953540   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1026 02:09:05.280663   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:07.282778   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:10.050421   62745 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.096859319s)
	I1026 02:09:10.050506   62745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 02:09:10.065231   62745 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 02:09:10.075554   62745 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:09:10.085543   62745 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:09:10.085565   62745 kubeadm.go:157] found existing configuration files:
	
	I1026 02:09:10.085631   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 02:09:10.094991   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:09:10.095054   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:09:10.104635   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 02:09:10.113803   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:09:10.113864   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:09:10.123460   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 02:09:10.132411   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:09:10.132472   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:09:10.141558   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 02:09:10.150054   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:09:10.150111   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 02:09:10.161808   62745 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 02:09:10.231369   62745 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1026 02:09:10.231494   62745 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 02:09:10.394653   62745 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 02:09:10.394842   62745 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 02:09:10.394994   62745 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1026 02:09:10.583351   62745 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 02:09:10.585369   62745 out.go:235]   - Generating certificates and keys ...
	I1026 02:09:10.585500   62745 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 02:09:10.585590   62745 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 02:09:10.585697   62745 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1026 02:09:10.585791   62745 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1026 02:09:10.585898   62745 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1026 02:09:10.585980   62745 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1026 02:09:10.586195   62745 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1026 02:09:10.586557   62745 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1026 02:09:10.586950   62745 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1026 02:09:10.587291   62745 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1026 02:09:10.587415   62745 kubeadm.go:310] [certs] Using the existing "sa" key
	I1026 02:09:10.587504   62745 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 02:09:10.860465   62745 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 02:09:11.279436   62745 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 02:09:11.406209   62745 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 02:09:11.681643   62745 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 02:09:11.696371   62745 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 02:09:11.697571   62745 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 02:09:11.697642   62745 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 02:09:11.833212   62745 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 02:09:10.300505   62379 api_server.go:253] Checking apiserver healthz at https://192.168.61.84:8443/healthz ...
	I1026 02:09:10.307044   62379 api_server.go:279] https://192.168.61.84:8443/healthz returned 200:
	ok
	I1026 02:09:10.307950   62379 api_server.go:141] control plane version: v1.31.2
	I1026 02:09:10.307972   62379 api_server.go:131] duration metric: took 3.81123162s to wait for apiserver health ...
	I1026 02:09:10.307979   62379 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 02:09:10.308000   62379 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:09:10.308051   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:09:10.350830   62379 cri.go:89] found id: "04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546"
	I1026 02:09:10.350860   62379 cri.go:89] found id: ""
	I1026 02:09:10.350869   62379 logs.go:282] 1 containers: [04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546]
	I1026 02:09:10.350938   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:10.356194   62379 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:09:10.356266   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:09:10.399054   62379 cri.go:89] found id: "3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d"
	I1026 02:09:10.399079   62379 cri.go:89] found id: ""
	I1026 02:09:10.399088   62379 logs.go:282] 1 containers: [3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d]
	I1026 02:09:10.399146   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:10.403794   62379 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:09:10.403857   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:09:10.449016   62379 cri.go:89] found id: "ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237"
	I1026 02:09:10.449042   62379 cri.go:89] found id: ""
	I1026 02:09:10.449052   62379 logs.go:282] 1 containers: [ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237]
	I1026 02:09:10.449109   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:10.452964   62379 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:09:10.453030   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:09:10.494363   62379 cri.go:89] found id: "4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c"
	I1026 02:09:10.494386   62379 cri.go:89] found id: ""
	I1026 02:09:10.494396   62379 logs.go:282] 1 containers: [4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c]
	I1026 02:09:10.494452   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:10.498679   62379 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:09:10.498750   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:09:10.539465   62379 cri.go:89] found id: "8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b"
	I1026 02:09:10.539489   62379 cri.go:89] found id: ""
	I1026 02:09:10.539496   62379 logs.go:282] 1 containers: [8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b]
	I1026 02:09:10.539541   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:10.543476   62379 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:09:10.543544   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:09:10.584468   62379 cri.go:89] found id: "63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa"
	I1026 02:09:10.584490   62379 cri.go:89] found id: ""
	I1026 02:09:10.584500   62379 logs.go:282] 1 containers: [63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa]
	I1026 02:09:10.584574   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:10.590365   62379 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:09:10.590430   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:09:10.628699   62379 cri.go:89] found id: ""
	I1026 02:09:10.628729   62379 logs.go:282] 0 containers: []
	W1026 02:09:10.628738   62379 logs.go:284] No container was found matching "kindnet"
	I1026 02:09:10.628743   62379 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:09:10.628792   62379 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:09:10.668577   62379 cri.go:89] found id: "971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37"
	I1026 02:09:10.668602   62379 cri.go:89] found id: "ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72"
	I1026 02:09:10.668607   62379 cri.go:89] found id: ""
	I1026 02:09:10.668616   62379 logs.go:282] 2 containers: [971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37 ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72]
	I1026 02:09:10.668672   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:10.672713   62379 ssh_runner.go:195] Run: which crictl
	I1026 02:09:10.676475   62379 logs.go:123] Gathering logs for dmesg ...
	I1026 02:09:10.676495   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:09:10.689159   62379 logs.go:123] Gathering logs for coredns [ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237] ...
	I1026 02:09:10.689186   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237"
	I1026 02:09:10.724661   62379 logs.go:123] Gathering logs for kube-proxy [8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b] ...
	I1026 02:09:10.724687   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b"
	I1026 02:09:10.767487   62379 logs.go:123] Gathering logs for kube-controller-manager [63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa] ...
	I1026 02:09:10.767517   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa"
	I1026 02:09:10.825273   62379 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:09:10.825305   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:09:11.221508   62379 logs.go:123] Gathering logs for container status ...
	I1026 02:09:11.221550   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:09:11.281505   62379 logs.go:123] Gathering logs for kubelet ...
	I1026 02:09:11.281535   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:09:11.353648   62379 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:09:11.353701   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 02:09:11.466274   62379 logs.go:123] Gathering logs for kube-apiserver [04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546] ...
	I1026 02:09:11.466304   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546"
	I1026 02:09:11.513805   62379 logs.go:123] Gathering logs for etcd [3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d] ...
	I1026 02:09:11.513835   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d"
	I1026 02:09:11.554633   62379 logs.go:123] Gathering logs for kube-scheduler [4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c] ...
	I1026 02:09:11.554665   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c"
	I1026 02:09:11.595991   62379 logs.go:123] Gathering logs for storage-provisioner [971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37] ...
	I1026 02:09:11.596022   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37"
	I1026 02:09:11.630514   62379 logs.go:123] Gathering logs for storage-provisioner [ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72] ...
	I1026 02:09:11.630539   62379 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72"
	I1026 02:09:11.834981   62745 out.go:235]   - Booting up control plane ...
	I1026 02:09:11.835117   62745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 02:09:11.840834   62745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 02:09:11.843456   62745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 02:09:11.843554   62745 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 02:09:11.846464   62745 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1026 02:09:14.186847   62379 system_pods.go:59] 8 kube-system pods found
	I1026 02:09:14.186887   62379 system_pods.go:61] "coredns-7c65d6cfc9-cs6fv" [05855bd2-58d5-4d83-b5b4-6b7d28b13957] Running
	I1026 02:09:14.186904   62379 system_pods.go:61] "etcd-embed-certs-767480" [4051ced7-363a-45fd-be21-ff185f16e2f8] Running
	I1026 02:09:14.186911   62379 system_pods.go:61] "kube-apiserver-embed-certs-767480" [04a9ea55-a86f-43b0-a784-0ea9418514c9] Running
	I1026 02:09:14.186916   62379 system_pods.go:61] "kube-controller-manager-embed-certs-767480" [c90949e8-8094-4535-8b16-5836fb6a6d41] Running
	I1026 02:09:14.186922   62379 system_pods.go:61] "kube-proxy-nlwh5" [e83fffc8-a912-4919-b5f6-ccc2745bf855] Running
	I1026 02:09:14.186927   62379 system_pods.go:61] "kube-scheduler-embed-certs-767480" [24749997-d237-4b45-9e45-609bac5f350c] Running
	I1026 02:09:14.186933   62379 system_pods.go:61] "metrics-server-6867b74b74-c9cwx" [62a837f0-6fdb-418e-a5dd-e3196bb51346] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 02:09:14.186937   62379 system_pods.go:61] "storage-provisioner" [e34a3b8d-f8fd-4d67-b4e0-cd4b532d2824] Running
	I1026 02:09:14.186944   62379 system_pods.go:74] duration metric: took 3.878959932s to wait for pod list to return data ...
	I1026 02:09:14.186951   62379 default_sa.go:34] waiting for default service account to be created ...
	I1026 02:09:14.189491   62379 default_sa.go:45] found service account: "default"
	I1026 02:09:14.189515   62379 default_sa.go:55] duration metric: took 2.557846ms for default service account to be created ...
	I1026 02:09:14.189526   62379 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 02:09:14.195532   62379 system_pods.go:86] 8 kube-system pods found
	I1026 02:09:14.195563   62379 system_pods.go:89] "coredns-7c65d6cfc9-cs6fv" [05855bd2-58d5-4d83-b5b4-6b7d28b13957] Running
	I1026 02:09:14.195571   62379 system_pods.go:89] "etcd-embed-certs-767480" [4051ced7-363a-45fd-be21-ff185f16e2f8] Running
	I1026 02:09:14.195578   62379 system_pods.go:89] "kube-apiserver-embed-certs-767480" [04a9ea55-a86f-43b0-a784-0ea9418514c9] Running
	I1026 02:09:14.195584   62379 system_pods.go:89] "kube-controller-manager-embed-certs-767480" [c90949e8-8094-4535-8b16-5836fb6a6d41] Running
	I1026 02:09:14.195589   62379 system_pods.go:89] "kube-proxy-nlwh5" [e83fffc8-a912-4919-b5f6-ccc2745bf855] Running
	I1026 02:09:14.195594   62379 system_pods.go:89] "kube-scheduler-embed-certs-767480" [24749997-d237-4b45-9e45-609bac5f350c] Running
	I1026 02:09:14.195604   62379 system_pods.go:89] "metrics-server-6867b74b74-c9cwx" [62a837f0-6fdb-418e-a5dd-e3196bb51346] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 02:09:14.195611   62379 system_pods.go:89] "storage-provisioner" [e34a3b8d-f8fd-4d67-b4e0-cd4b532d2824] Running
	I1026 02:09:14.195621   62379 system_pods.go:126] duration metric: took 6.087465ms to wait for k8s-apps to be running ...
	I1026 02:09:14.195629   62379 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 02:09:14.195680   62379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 02:09:14.211261   62379 system_svc.go:56] duration metric: took 15.622509ms WaitForService to wait for kubelet
	I1026 02:09:14.211290   62379 kubeadm.go:582] duration metric: took 4m23.88545626s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:09:14.211311   62379 node_conditions.go:102] verifying NodePressure condition ...
	I1026 02:09:14.214306   62379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 02:09:14.214334   62379 node_conditions.go:123] node cpu capacity is 2
	I1026 02:09:14.214356   62379 node_conditions.go:105] duration metric: took 3.036732ms to run NodePressure ...
	I1026 02:09:14.214375   62379 start.go:241] waiting for startup goroutines ...
	I1026 02:09:14.214386   62379 start.go:246] waiting for cluster config update ...
	I1026 02:09:14.214400   62379 start.go:255] writing updated cluster config ...
	I1026 02:09:14.214759   62379 ssh_runner.go:195] Run: rm -f paused
	I1026 02:09:14.266848   62379 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1026 02:09:14.268822   62379 out.go:177] * Done! kubectl is now configured to use "embed-certs-767480" cluster and "default" namespace by default
	I1026 02:09:09.781910   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:12.281213   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:14.282010   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:16.781113   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:19.280284   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:21.781000   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:24.280174   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:26.281259   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:28.781617   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:31.280193   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:33.280927   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:35.780668   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:38.280047   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:40.280630   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:42.284945   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:44.781305   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:47.279441   62203 pod_ready.go:103] pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace has status "Ready":"False"
	I1026 02:09:48.280250   62203 pod_ready.go:82] duration metric: took 4m0.005908607s for pod "metrics-server-6867b74b74-kwrk2" in "kube-system" namespace to be "Ready" ...
	E1026 02:09:48.280274   62203 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1026 02:09:48.280282   62203 pod_ready.go:39] duration metric: took 4m1.202297063s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:09:48.280297   62203 api_server.go:52] waiting for apiserver process to appear ...
	I1026 02:09:48.280324   62203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:09:48.280377   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:09:48.322918   62203 cri.go:89] found id: "e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e"
	I1026 02:09:48.322945   62203 cri.go:89] found id: ""
	I1026 02:09:48.322954   62203 logs.go:282] 1 containers: [e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e]
	I1026 02:09:48.323008   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:48.326973   62203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:09:48.327027   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:09:48.363168   62203 cri.go:89] found id: "1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01"
	I1026 02:09:48.363188   62203 cri.go:89] found id: ""
	I1026 02:09:48.363195   62203 logs.go:282] 1 containers: [1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01]
	I1026 02:09:48.363237   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:48.367458   62203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:09:48.367524   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:09:48.402964   62203 cri.go:89] found id: "c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0"
	I1026 02:09:48.402997   62203 cri.go:89] found id: ""
	I1026 02:09:48.403007   62203 logs.go:282] 1 containers: [c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0]
	I1026 02:09:48.403067   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:48.407067   62203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:09:48.407125   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:09:48.442215   62203 cri.go:89] found id: "ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be"
	I1026 02:09:48.442238   62203 cri.go:89] found id: ""
	I1026 02:09:48.442245   62203 logs.go:282] 1 containers: [ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be]
	I1026 02:09:48.442300   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:48.445994   62203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:09:48.446050   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:09:48.480420   62203 cri.go:89] found id: "8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff"
	I1026 02:09:48.480446   62203 cri.go:89] found id: ""
	I1026 02:09:48.480455   62203 logs.go:282] 1 containers: [8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff]
	I1026 02:09:48.480517   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:48.484302   62203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:09:48.484358   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:09:48.523388   62203 cri.go:89] found id: "dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454"
	I1026 02:09:48.523415   62203 cri.go:89] found id: ""
	I1026 02:09:48.523425   62203 logs.go:282] 1 containers: [dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454]
	I1026 02:09:48.523484   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:48.527265   62203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:09:48.527328   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:09:48.560350   62203 cri.go:89] found id: ""
	I1026 02:09:48.560380   62203 logs.go:282] 0 containers: []
	W1026 02:09:48.560391   62203 logs.go:284] No container was found matching "kindnet"
	I1026 02:09:48.560398   62203 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:09:48.560458   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:09:48.594076   62203 cri.go:89] found id: "ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193"
	I1026 02:09:48.594099   62203 cri.go:89] found id: "ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45"
	I1026 02:09:48.594103   62203 cri.go:89] found id: ""
	I1026 02:09:48.594110   62203 logs.go:282] 2 containers: [ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193 ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45]
	I1026 02:09:48.594155   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:48.598018   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:48.601708   62203 logs.go:123] Gathering logs for storage-provisioner [ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193] ...
	I1026 02:09:48.601731   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193"
	I1026 02:09:48.639594   62203 logs.go:123] Gathering logs for kubelet ...
	I1026 02:09:48.639626   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:09:48.712404   62203 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:09:48.712449   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 02:09:48.834546   62203 logs.go:123] Gathering logs for kube-apiserver [e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e] ...
	I1026 02:09:48.834573   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e"
	I1026 02:09:48.882595   62203 logs.go:123] Gathering logs for kube-scheduler [ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be] ...
	I1026 02:09:48.882629   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be"
	I1026 02:09:48.917158   62203 logs.go:123] Gathering logs for kube-proxy [8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff] ...
	I1026 02:09:48.917183   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff"
	I1026 02:09:48.950523   62203 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:09:48.950553   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:09:51.847828   62745 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1026 02:09:51.847957   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:09:51.848200   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:09:49.446139   62203 logs.go:123] Gathering logs for container status ...
	I1026 02:09:49.446183   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:09:49.491656   62203 logs.go:123] Gathering logs for dmesg ...
	I1026 02:09:49.491686   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:09:49.507950   62203 logs.go:123] Gathering logs for etcd [1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01] ...
	I1026 02:09:49.507977   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01"
	I1026 02:09:49.545131   62203 logs.go:123] Gathering logs for coredns [c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0] ...
	I1026 02:09:49.545163   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0"
	I1026 02:09:49.578141   62203 logs.go:123] Gathering logs for kube-controller-manager [dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454] ...
	I1026 02:09:49.578169   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454"
	I1026 02:09:49.631659   62203 logs.go:123] Gathering logs for storage-provisioner [ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45] ...
	I1026 02:09:49.631693   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45"
	I1026 02:09:52.170659   62203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:09:52.187546   62203 api_server.go:72] duration metric: took 4m13.347050339s to wait for apiserver process to appear ...
	I1026 02:09:52.187575   62203 api_server.go:88] waiting for apiserver healthz status ...
	I1026 02:09:52.187612   62203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:09:52.187676   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:09:52.224792   62203 cri.go:89] found id: "e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e"
	I1026 02:09:52.224813   62203 cri.go:89] found id: ""
	I1026 02:09:52.224820   62203 logs.go:282] 1 containers: [e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e]
	I1026 02:09:52.224871   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:52.228543   62203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:09:52.228609   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:09:52.268086   62203 cri.go:89] found id: "1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01"
	I1026 02:09:52.268108   62203 cri.go:89] found id: ""
	I1026 02:09:52.268115   62203 logs.go:282] 1 containers: [1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01]
	I1026 02:09:52.268158   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:52.271974   62203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:09:52.272042   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:09:52.311978   62203 cri.go:89] found id: "c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0"
	I1026 02:09:52.312007   62203 cri.go:89] found id: ""
	I1026 02:09:52.312017   62203 logs.go:282] 1 containers: [c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0]
	I1026 02:09:52.312069   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:52.315815   62203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:09:52.315866   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:09:52.357536   62203 cri.go:89] found id: "ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be"
	I1026 02:09:52.357561   62203 cri.go:89] found id: ""
	I1026 02:09:52.357571   62203 logs.go:282] 1 containers: [ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be]
	I1026 02:09:52.357634   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:52.361434   62203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:09:52.361494   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:09:52.395735   62203 cri.go:89] found id: "8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff"
	I1026 02:09:52.395756   62203 cri.go:89] found id: ""
	I1026 02:09:52.395763   62203 logs.go:282] 1 containers: [8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff]
	I1026 02:09:52.395806   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:52.399435   62203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:09:52.399495   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:09:52.431351   62203 cri.go:89] found id: "dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454"
	I1026 02:09:52.431378   62203 cri.go:89] found id: ""
	I1026 02:09:52.431388   62203 logs.go:282] 1 containers: [dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454]
	I1026 02:09:52.431447   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:52.436040   62203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:09:52.436116   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:09:52.470535   62203 cri.go:89] found id: ""
	I1026 02:09:52.470561   62203 logs.go:282] 0 containers: []
	W1026 02:09:52.470572   62203 logs.go:284] No container was found matching "kindnet"
	I1026 02:09:52.470580   62203 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:09:52.470633   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:09:52.518093   62203 cri.go:89] found id: "ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193"
	I1026 02:09:52.518117   62203 cri.go:89] found id: "ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45"
	I1026 02:09:52.518123   62203 cri.go:89] found id: ""
	I1026 02:09:52.518132   62203 logs.go:282] 2 containers: [ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193 ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45]
	I1026 02:09:52.518191   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:52.522346   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:52.525904   62203 logs.go:123] Gathering logs for kube-scheduler [ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be] ...
	I1026 02:09:52.525931   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be"
	I1026 02:09:52.558899   62203 logs.go:123] Gathering logs for storage-provisioner [ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45] ...
	I1026 02:09:52.558928   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45"
	I1026 02:09:52.595745   62203 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:09:52.595778   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 02:09:52.698912   62203 logs.go:123] Gathering logs for etcd [1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01] ...
	I1026 02:09:52.698942   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01"
	I1026 02:09:52.741058   62203 logs.go:123] Gathering logs for coredns [c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0] ...
	I1026 02:09:52.741093   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0"
	I1026 02:09:52.774315   62203 logs.go:123] Gathering logs for kube-proxy [8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff] ...
	I1026 02:09:52.774347   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff"
	I1026 02:09:52.812981   62203 logs.go:123] Gathering logs for kube-controller-manager [dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454] ...
	I1026 02:09:52.813008   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454"
	I1026 02:09:52.865645   62203 logs.go:123] Gathering logs for storage-provisioner [ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193] ...
	I1026 02:09:52.865684   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193"
	I1026 02:09:52.900295   62203 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:09:52.900323   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:09:53.313607   62203 logs.go:123] Gathering logs for container status ...
	I1026 02:09:53.313663   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:09:53.357967   62203 logs.go:123] Gathering logs for kubelet ...
	I1026 02:09:53.357996   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:09:53.425898   62203 logs.go:123] Gathering logs for dmesg ...
	I1026 02:09:53.425933   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:09:53.439398   62203 logs.go:123] Gathering logs for kube-apiserver [e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e] ...
	I1026 02:09:53.439426   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e"
	I1026 02:09:56.848464   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:09:56.848669   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:09:55.988133   62203 api_server.go:253] Checking apiserver healthz at https://192.168.50.9:8443/healthz ...
	I1026 02:09:55.992474   62203 api_server.go:279] https://192.168.50.9:8443/healthz returned 200:
	ok
	I1026 02:09:55.993380   62203 api_server.go:141] control plane version: v1.31.2
	I1026 02:09:55.993399   62203 api_server.go:131] duration metric: took 3.805817486s to wait for apiserver health ...
	I1026 02:09:55.993408   62203 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 02:09:55.993456   62203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:09:55.993512   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:09:56.029238   62203 cri.go:89] found id: "e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e"
	I1026 02:09:56.029262   62203 cri.go:89] found id: ""
	I1026 02:09:56.029272   62203 logs.go:282] 1 containers: [e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e]
	I1026 02:09:56.029319   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:56.033078   62203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:09:56.033133   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:09:56.069710   62203 cri.go:89] found id: "1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01"
	I1026 02:09:56.069734   62203 cri.go:89] found id: ""
	I1026 02:09:56.069744   62203 logs.go:282] 1 containers: [1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01]
	I1026 02:09:56.069802   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:56.073681   62203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:09:56.073740   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:09:56.115290   62203 cri.go:89] found id: "c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0"
	I1026 02:09:56.115308   62203 cri.go:89] found id: ""
	I1026 02:09:56.115315   62203 logs.go:282] 1 containers: [c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0]
	I1026 02:09:56.115358   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:56.119881   62203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:09:56.119944   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:09:56.159535   62203 cri.go:89] found id: "ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be"
	I1026 02:09:56.159561   62203 cri.go:89] found id: ""
	I1026 02:09:56.159570   62203 logs.go:282] 1 containers: [ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be]
	I1026 02:09:56.159626   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:56.163317   62203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:09:56.163379   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:09:56.201336   62203 cri.go:89] found id: "8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff"
	I1026 02:09:56.201357   62203 cri.go:89] found id: ""
	I1026 02:09:56.201365   62203 logs.go:282] 1 containers: [8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff]
	I1026 02:09:56.201437   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:56.205211   62203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:09:56.205275   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:09:56.239476   62203 cri.go:89] found id: "dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454"
	I1026 02:09:56.239497   62203 cri.go:89] found id: ""
	I1026 02:09:56.239504   62203 logs.go:282] 1 containers: [dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454]
	I1026 02:09:56.239552   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:56.243413   62203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:09:56.243480   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:09:56.282694   62203 cri.go:89] found id: ""
	I1026 02:09:56.282725   62203 logs.go:282] 0 containers: []
	W1026 02:09:56.282736   62203 logs.go:284] No container was found matching "kindnet"
	I1026 02:09:56.282744   62203 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:09:56.282805   62203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:09:56.315081   62203 cri.go:89] found id: "ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193"
	I1026 02:09:56.315109   62203 cri.go:89] found id: "ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45"
	I1026 02:09:56.315115   62203 cri.go:89] found id: ""
	I1026 02:09:56.315124   62203 logs.go:282] 2 containers: [ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193 ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45]
	I1026 02:09:56.315183   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:56.319283   62203 ssh_runner.go:195] Run: which crictl
	I1026 02:09:56.322709   62203 logs.go:123] Gathering logs for dmesg ...
	I1026 02:09:56.322730   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:09:56.335140   62203 logs.go:123] Gathering logs for etcd [1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01] ...
	I1026 02:09:56.335160   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01"
	I1026 02:09:56.372113   62203 logs.go:123] Gathering logs for kube-controller-manager [dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454] ...
	I1026 02:09:56.372138   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454"
	I1026 02:09:56.421180   62203 logs.go:123] Gathering logs for storage-provisioner [ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193] ...
	I1026 02:09:56.421211   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193"
	I1026 02:09:56.454605   62203 logs.go:123] Gathering logs for storage-provisioner [ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45] ...
	I1026 02:09:56.454627   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45"
	I1026 02:09:56.485353   62203 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:09:56.485377   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:09:56.844518   62203 logs.go:123] Gathering logs for container status ...
	I1026 02:09:56.844557   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:09:56.884731   62203 logs.go:123] Gathering logs for kubelet ...
	I1026 02:09:56.884762   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:09:56.952394   62203 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:09:56.952429   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 02:09:57.049433   62203 logs.go:123] Gathering logs for kube-apiserver [e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e] ...
	I1026 02:09:57.049466   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e"
	I1026 02:09:57.091443   62203 logs.go:123] Gathering logs for coredns [c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0] ...
	I1026 02:09:57.091475   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0"
	I1026 02:09:57.124595   62203 logs.go:123] Gathering logs for kube-scheduler [ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be] ...
	I1026 02:09:57.124625   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be"
	I1026 02:09:57.159971   62203 logs.go:123] Gathering logs for kube-proxy [8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff] ...
	I1026 02:09:57.159997   62203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff"
	I1026 02:09:59.700700   62203 system_pods.go:59] 8 kube-system pods found
	I1026 02:09:59.700727   62203 system_pods.go:61] "coredns-7c65d6cfc9-4bxg2" [6d00ff8f-b1c5-4d37-bb5a-48874ca5fc31] Running
	I1026 02:09:59.700732   62203 system_pods.go:61] "etcd-no-preload-093148" [fdbc9d71-98dc-4808-abdf-19d81b1a58a0] Running
	I1026 02:09:59.700736   62203 system_pods.go:61] "kube-apiserver-no-preload-093148" [b75bc2e9-71d6-4526-ba8e-bca2755ea9e3] Running
	I1026 02:09:59.700740   62203 system_pods.go:61] "kube-controller-manager-no-preload-093148" [4e415184-b1c5-452f-886f-ce654a2d82c1] Running
	I1026 02:09:59.700744   62203 system_pods.go:61] "kube-proxy-z7nrz" [f9041b89-8769-4652-8d39-0982091ffc7c] Running
	I1026 02:09:59.700747   62203 system_pods.go:61] "kube-scheduler-no-preload-093148" [a0a403d6-29bf-48a4-aee4-50e3dc2465b3] Running
	I1026 02:09:59.700752   62203 system_pods.go:61] "metrics-server-6867b74b74-kwrk2" [25c9f457-5112-4b5b-8a28-6cb290f5ebdf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 02:09:59.700757   62203 system_pods.go:61] "storage-provisioner" [e7f5b94f-ba28-42f6-a8bf-1e7ab4248537] Running
	I1026 02:09:59.700766   62203 system_pods.go:74] duration metric: took 3.707351603s to wait for pod list to return data ...
	I1026 02:09:59.700774   62203 default_sa.go:34] waiting for default service account to be created ...
	I1026 02:09:59.704255   62203 default_sa.go:45] found service account: "default"
	I1026 02:09:59.704316   62203 default_sa.go:55] duration metric: took 3.536192ms for default service account to be created ...
	I1026 02:09:59.704324   62203 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 02:09:59.709011   62203 system_pods.go:86] 8 kube-system pods found
	I1026 02:09:59.709035   62203 system_pods.go:89] "coredns-7c65d6cfc9-4bxg2" [6d00ff8f-b1c5-4d37-bb5a-48874ca5fc31] Running
	I1026 02:09:59.709040   62203 system_pods.go:89] "etcd-no-preload-093148" [fdbc9d71-98dc-4808-abdf-19d81b1a58a0] Running
	I1026 02:09:59.709045   62203 system_pods.go:89] "kube-apiserver-no-preload-093148" [b75bc2e9-71d6-4526-ba8e-bca2755ea9e3] Running
	I1026 02:09:59.709049   62203 system_pods.go:89] "kube-controller-manager-no-preload-093148" [4e415184-b1c5-452f-886f-ce654a2d82c1] Running
	I1026 02:09:59.709053   62203 system_pods.go:89] "kube-proxy-z7nrz" [f9041b89-8769-4652-8d39-0982091ffc7c] Running
	I1026 02:09:59.709056   62203 system_pods.go:89] "kube-scheduler-no-preload-093148" [a0a403d6-29bf-48a4-aee4-50e3dc2465b3] Running
	I1026 02:09:59.709062   62203 system_pods.go:89] "metrics-server-6867b74b74-kwrk2" [25c9f457-5112-4b5b-8a28-6cb290f5ebdf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 02:09:59.709066   62203 system_pods.go:89] "storage-provisioner" [e7f5b94f-ba28-42f6-a8bf-1e7ab4248537] Running
	I1026 02:09:59.709073   62203 system_pods.go:126] duration metric: took 4.743781ms to wait for k8s-apps to be running ...
	I1026 02:09:59.709080   62203 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 02:09:59.709118   62203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 02:09:59.723926   62203 system_svc.go:56] duration metric: took 14.838924ms WaitForService to wait for kubelet
	I1026 02:09:59.723954   62203 kubeadm.go:582] duration metric: took 4m20.883463887s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:09:59.723982   62203 node_conditions.go:102] verifying NodePressure condition ...
	I1026 02:09:59.727941   62203 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 02:09:59.727964   62203 node_conditions.go:123] node cpu capacity is 2
	I1026 02:09:59.727976   62203 node_conditions.go:105] duration metric: took 3.988712ms to run NodePressure ...
	I1026 02:09:59.727990   62203 start.go:241] waiting for startup goroutines ...
	I1026 02:09:59.728002   62203 start.go:246] waiting for cluster config update ...
	I1026 02:09:59.728014   62203 start.go:255] writing updated cluster config ...
	I1026 02:09:59.728334   62203 ssh_runner.go:195] Run: rm -f paused
	I1026 02:09:59.774580   62203 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1026 02:09:59.776464   62203 out.go:177] * Done! kubectl is now configured to use "no-preload-093148" cluster and "default" namespace by default
	I1026 02:10:06.849190   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:10:06.849488   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:10:26.850376   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:10:26.850598   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:11:06.852492   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:11:06.852819   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:11:06.852842   62745 kubeadm.go:310] 
	I1026 02:11:06.852910   62745 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1026 02:11:06.852968   62745 kubeadm.go:310] 		timed out waiting for the condition
	I1026 02:11:06.852992   62745 kubeadm.go:310] 
	I1026 02:11:06.853048   62745 kubeadm.go:310] 	This error is likely caused by:
	I1026 02:11:06.853094   62745 kubeadm.go:310] 		- The kubelet is not running
	I1026 02:11:06.853225   62745 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1026 02:11:06.853236   62745 kubeadm.go:310] 
	I1026 02:11:06.853361   62745 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1026 02:11:06.853441   62745 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1026 02:11:06.853495   62745 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1026 02:11:06.853505   62745 kubeadm.go:310] 
	I1026 02:11:06.853653   62745 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1026 02:11:06.853784   62745 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1026 02:11:06.853804   62745 kubeadm.go:310] 
	I1026 02:11:06.853970   62745 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1026 02:11:06.854059   62745 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1026 02:11:06.854125   62745 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1026 02:11:06.854224   62745 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1026 02:11:06.854250   62745 kubeadm.go:310] 
	I1026 02:11:06.854678   62745 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 02:11:06.854754   62745 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1026 02:11:06.854813   62745 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1026 02:11:06.854943   62745 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1026 02:11:06.854989   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1026 02:11:12.306225   62745 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.451210775s)
	I1026 02:11:12.306315   62745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 02:11:12.319822   62745 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:11:12.328677   62745 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:11:12.328703   62745 kubeadm.go:157] found existing configuration files:
	
	I1026 02:11:12.328749   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 02:11:12.337470   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:11:12.337528   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:11:12.346110   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 02:11:12.354217   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:11:12.354268   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:11:12.362806   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 02:11:12.371067   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:11:12.371119   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:11:12.379886   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 02:11:12.388326   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:11:12.388390   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 02:11:12.396637   62745 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 02:11:12.462439   62745 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1026 02:11:12.462496   62745 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 02:11:12.611392   62745 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 02:11:12.611545   62745 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 02:11:12.611700   62745 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1026 02:11:12.792037   62745 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 02:11:12.793412   62745 out.go:235]   - Generating certificates and keys ...
	I1026 02:11:12.793523   62745 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 02:11:12.793617   62745 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 02:11:12.793756   62745 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1026 02:11:12.793840   62745 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1026 02:11:12.793948   62745 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1026 02:11:12.794019   62745 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1026 02:11:12.794117   62745 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1026 02:11:12.794214   62745 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1026 02:11:12.794327   62745 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1026 02:11:12.794393   62745 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1026 02:11:12.794427   62745 kubeadm.go:310] [certs] Using the existing "sa" key
	I1026 02:11:12.794482   62745 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 02:11:13.022002   62745 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 02:11:13.257574   62745 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 02:11:13.433187   62745 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 02:11:13.566478   62745 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 02:11:13.582104   62745 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 02:11:13.583267   62745 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 02:11:13.583340   62745 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 02:11:13.736073   62745 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 02:11:13.738713   62745 out.go:235]   - Booting up control plane ...
	I1026 02:11:13.738828   62745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 02:11:13.738921   62745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 02:11:13.741059   62745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 02:11:13.742288   62745 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 02:11:13.747621   62745 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1026 02:11:16.436298   61346 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I1026 02:11:16.436424   61346 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1026 02:11:16.439096   61346 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1026 02:11:16.439206   61346 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 02:11:16.439337   61346 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 02:11:16.439474   61346 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 02:11:16.439610   61346 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 02:11:16.439736   61346 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 02:11:16.441603   61346 out.go:235]   - Generating certificates and keys ...
	I1026 02:11:16.441687   61346 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 02:11:16.441743   61346 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 02:11:16.441823   61346 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1026 02:11:16.441896   61346 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1026 02:11:16.441986   61346 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1026 02:11:16.442065   61346 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1026 02:11:16.442150   61346 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1026 02:11:16.442235   61346 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1026 02:11:16.442358   61346 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1026 02:11:16.442472   61346 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1026 02:11:16.442535   61346 kubeadm.go:310] [certs] Using the existing "sa" key
	I1026 02:11:16.442603   61346 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 02:11:16.442677   61346 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 02:11:16.442765   61346 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 02:11:16.442873   61346 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 02:11:16.442969   61346 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 02:11:16.443047   61346 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 02:11:16.443144   61346 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 02:11:16.443235   61346 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 02:11:16.444711   61346 out.go:235]   - Booting up control plane ...
	I1026 02:11:16.444795   61346 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 02:11:16.444874   61346 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 02:11:16.445040   61346 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 02:11:16.445182   61346 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 02:11:16.445308   61346 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 02:11:16.445370   61346 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 02:11:16.445545   61346 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 02:11:16.445674   61346 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 02:11:16.445733   61346 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 511.979558ms
	I1026 02:11:16.445809   61346 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1026 02:11:16.445901   61346 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.001297562s
	I1026 02:11:16.445911   61346 kubeadm.go:310] 
	I1026 02:11:16.445966   61346 kubeadm.go:310] Unfortunately, an error has occurred:
	I1026 02:11:16.445997   61346 kubeadm.go:310] 	context deadline exceeded
	I1026 02:11:16.446003   61346 kubeadm.go:310] 
	I1026 02:11:16.446031   61346 kubeadm.go:310] This error is likely caused by:
	I1026 02:11:16.446063   61346 kubeadm.go:310] 	- The kubelet is not running
	I1026 02:11:16.446175   61346 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1026 02:11:16.446187   61346 kubeadm.go:310] 
	I1026 02:11:16.446332   61346 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1026 02:11:16.446370   61346 kubeadm.go:310] 	- 'systemctl status kubelet'
	I1026 02:11:16.446396   61346 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I1026 02:11:16.446402   61346 kubeadm.go:310] 
	I1026 02:11:16.446533   61346 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1026 02:11:16.446610   61346 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1026 02:11:16.446697   61346 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1026 02:11:16.446792   61346 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1026 02:11:16.446862   61346 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I1026 02:11:16.446972   61346 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1026 02:11:16.447020   61346 kubeadm.go:394] duration metric: took 12m9.243108785s to StartCluster
	I1026 02:11:16.447071   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:11:16.447131   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:11:16.490959   61346 cri.go:89] found id: "44d5d41eeb8c8b58abb214424cf349d71a177293d8609c511cdd288d0b070b54"
	I1026 02:11:16.490985   61346 cri.go:89] found id: ""
	I1026 02:11:16.490995   61346 logs.go:282] 1 containers: [44d5d41eeb8c8b58abb214424cf349d71a177293d8609c511cdd288d0b070b54]
	I1026 02:11:16.491056   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:11:16.495086   61346 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:11:16.495155   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:11:16.534664   61346 cri.go:89] found id: ""
	I1026 02:11:16.534693   61346 logs.go:282] 0 containers: []
	W1026 02:11:16.534700   61346 logs.go:284] No container was found matching "etcd"
	I1026 02:11:16.534714   61346 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:11:16.534770   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:11:16.570066   61346 cri.go:89] found id: ""
	I1026 02:11:16.570091   61346 logs.go:282] 0 containers: []
	W1026 02:11:16.570099   61346 logs.go:284] No container was found matching "coredns"
	I1026 02:11:16.570104   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:11:16.570157   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:11:16.604894   61346 cri.go:89] found id: "6c6f2c8f97e0bdc88b8eaac6e1e9e07794bf7243b0b8b397543961c7f35584e8"
	I1026 02:11:16.604920   61346 cri.go:89] found id: ""
	I1026 02:11:16.604927   61346 logs.go:282] 1 containers: [6c6f2c8f97e0bdc88b8eaac6e1e9e07794bf7243b0b8b397543961c7f35584e8]
	I1026 02:11:16.604983   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:11:16.608961   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:11:16.609015   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:11:16.646246   61346 cri.go:89] found id: ""
	I1026 02:11:16.646277   61346 logs.go:282] 0 containers: []
	W1026 02:11:16.646285   61346 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:11:16.646291   61346 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:11:16.646339   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:11:16.678827   61346 cri.go:89] found id: "a3b71eb2723a1a3180087cbe3d02d2628dac81fc2ac6d749045b91e1a0cb307c"
	I1026 02:11:16.678851   61346 cri.go:89] found id: ""
	I1026 02:11:16.678860   61346 logs.go:282] 1 containers: [a3b71eb2723a1a3180087cbe3d02d2628dac81fc2ac6d749045b91e1a0cb307c]
	I1026 02:11:16.678903   61346 ssh_runner.go:195] Run: which crictl
	I1026 02:11:16.682389   61346 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:11:16.682439   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:11:16.713640   61346 cri.go:89] found id: ""
	I1026 02:11:16.713664   61346 logs.go:282] 0 containers: []
	W1026 02:11:16.713672   61346 logs.go:284] No container was found matching "kindnet"
	I1026 02:11:16.713677   61346 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:11:16.713721   61346 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:11:16.750715   61346 cri.go:89] found id: ""
	I1026 02:11:16.750737   61346 logs.go:282] 0 containers: []
	W1026 02:11:16.750745   61346 logs.go:284] No container was found matching "storage-provisioner"
	I1026 02:11:16.750754   61346 logs.go:123] Gathering logs for kubelet ...
	I1026 02:11:16.750765   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:11:16.883624   61346 logs.go:123] Gathering logs for dmesg ...
	I1026 02:11:16.883659   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:11:16.897426   61346 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:11:16.897459   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:11:16.975339   61346 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:11:16.975367   61346 logs.go:123] Gathering logs for kube-apiserver [44d5d41eeb8c8b58abb214424cf349d71a177293d8609c511cdd288d0b070b54] ...
	I1026 02:11:16.975382   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44d5d41eeb8c8b58abb214424cf349d71a177293d8609c511cdd288d0b070b54"
	I1026 02:11:17.011746   61346 logs.go:123] Gathering logs for kube-scheduler [6c6f2c8f97e0bdc88b8eaac6e1e9e07794bf7243b0b8b397543961c7f35584e8] ...
	I1026 02:11:17.011776   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c6f2c8f97e0bdc88b8eaac6e1e9e07794bf7243b0b8b397543961c7f35584e8"
	I1026 02:11:17.091235   61346 logs.go:123] Gathering logs for kube-controller-manager [a3b71eb2723a1a3180087cbe3d02d2628dac81fc2ac6d749045b91e1a0cb307c] ...
	I1026 02:11:17.091279   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3b71eb2723a1a3180087cbe3d02d2628dac81fc2ac6d749045b91e1a0cb307c"
	I1026 02:11:17.125678   61346 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:11:17.125710   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:11:17.350817   61346 logs.go:123] Gathering logs for container status ...
	I1026 02:11:17.350852   61346 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1026 02:11:17.395054   61346 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 511.979558ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.001297562s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1026 02:11:17.395128   61346 out.go:270] * 
	W1026 02:11:17.395194   61346 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 511.979558ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.001297562s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1026 02:11:17.395218   61346 out.go:270] * 
	W1026 02:11:17.396049   61346 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 02:11:17.398833   61346 out.go:201] 
	W1026 02:11:17.399780   61346 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 511.979558ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.001297562s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1026 02:11:17.399821   61346 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1026 02:11:17.399850   61346 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1026 02:11:17.401913   61346 out.go:201] 
	
	
	==> CRI-O <==
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.639382202Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729908679639358951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42eb2a53-c876-48ca-82a7-88735c121986 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.640114899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82402aee-e7e1-4c9c-acd7-c4ebadedd5eb name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.640170317Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82402aee-e7e1-4c9c-acd7-c4ebadedd5eb name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.640268270Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a3b71eb2723a1a3180087cbe3d02d2628dac81fc2ac6d749045b91e1a0cb307c,PodSandboxId:4b6ec25f3caf9f612397bc66351e6dfd298423d5050a2aaf8055104861ed5baa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1729908623427289383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-970804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75f6878c02c356168d8286fe4d911a46,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.c
ontainer.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44d5d41eeb8c8b58abb214424cf349d71a177293d8609c511cdd288d0b070b54,PodSandboxId:f0c208d3d498fcb043ef1185140a72ef3a41b5259cdd883d084bdbdf28629bfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1729908610417122624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-970804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0f307d03f5ab1b21c66a93d0c1d2592,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.conta
iner.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6f2c8f97e0bdc88b8eaac6e1e9e07794bf7243b0b8b397543961c7f35584e8,PodSandboxId:3e94f340cae30fe5b12ac2568743e57807ed7118caefcbd8d08698b55214c30a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908437041246244,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-970804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76aba175443e2543433bcdb489ed7385,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container
.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82402aee-e7e1-4c9c-acd7-c4ebadedd5eb name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.674782758Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37c8dea9-45db-49a5-9f37-67ca9aa50e81 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.674902938Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37c8dea9-45db-49a5-9f37-67ca9aa50e81 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.676030679Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=940fef5b-6bb1-4422-bdab-770bf2b72b24 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.676379837Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729908679676360103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=940fef5b-6bb1-4422-bdab-770bf2b72b24 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.676931103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=868cea98-9bac-483d-b919-f3daa5f4b7b9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.676981032Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=868cea98-9bac-483d-b919-f3daa5f4b7b9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.677077969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a3b71eb2723a1a3180087cbe3d02d2628dac81fc2ac6d749045b91e1a0cb307c,PodSandboxId:4b6ec25f3caf9f612397bc66351e6dfd298423d5050a2aaf8055104861ed5baa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1729908623427289383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-970804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75f6878c02c356168d8286fe4d911a46,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.c
ontainer.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44d5d41eeb8c8b58abb214424cf349d71a177293d8609c511cdd288d0b070b54,PodSandboxId:f0c208d3d498fcb043ef1185140a72ef3a41b5259cdd883d084bdbdf28629bfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1729908610417122624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-970804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0f307d03f5ab1b21c66a93d0c1d2592,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.conta
iner.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6f2c8f97e0bdc88b8eaac6e1e9e07794bf7243b0b8b397543961c7f35584e8,PodSandboxId:3e94f340cae30fe5b12ac2568743e57807ed7118caefcbd8d08698b55214c30a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908437041246244,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-970804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76aba175443e2543433bcdb489ed7385,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container
.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=868cea98-9bac-483d-b919-f3daa5f4b7b9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.711016338Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd96a493-8633-4568-9b7e-2a76e51a7450 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.711086411Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd96a493-8633-4568-9b7e-2a76e51a7450 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.711916478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5978b564-f18d-4316-a214-f43adad2a57f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.712259603Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729908679712239646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5978b564-f18d-4316-a214-f43adad2a57f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.712698467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ff0a249-453e-4cdc-9af1-d72d710116ae name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.712746195Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ff0a249-453e-4cdc-9af1-d72d710116ae name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.712860398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a3b71eb2723a1a3180087cbe3d02d2628dac81fc2ac6d749045b91e1a0cb307c,PodSandboxId:4b6ec25f3caf9f612397bc66351e6dfd298423d5050a2aaf8055104861ed5baa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1729908623427289383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-970804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75f6878c02c356168d8286fe4d911a46,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.c
ontainer.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44d5d41eeb8c8b58abb214424cf349d71a177293d8609c511cdd288d0b070b54,PodSandboxId:f0c208d3d498fcb043ef1185140a72ef3a41b5259cdd883d084bdbdf28629bfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1729908610417122624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-970804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0f307d03f5ab1b21c66a93d0c1d2592,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.conta
iner.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6f2c8f97e0bdc88b8eaac6e1e9e07794bf7243b0b8b397543961c7f35584e8,PodSandboxId:3e94f340cae30fe5b12ac2568743e57807ed7118caefcbd8d08698b55214c30a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908437041246244,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-970804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76aba175443e2543433bcdb489ed7385,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container
.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ff0a249-453e-4cdc-9af1-d72d710116ae name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.746394964Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a4ec3eff-b8cd-43bd-9831-74c3fdc9f49f name=/runtime.v1.RuntimeService/Version
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.746472291Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a4ec3eff-b8cd-43bd-9831-74c3fdc9f49f name=/runtime.v1.RuntimeService/Version
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.747653482Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5dd9122c-b122-43a1-8f8e-aa572ffad88b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.748330105Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729908679748297153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5dd9122c-b122-43a1-8f8e-aa572ffad88b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.749047171Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d4f5ef5-b67d-4186-bca3-895839c62041 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.749133353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d4f5ef5-b67d-4186-bca3-895839c62041 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:11:19 kubernetes-upgrade-970804 crio[1882]: time="2024-10-26 02:11:19.749284060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a3b71eb2723a1a3180087cbe3d02d2628dac81fc2ac6d749045b91e1a0cb307c,PodSandboxId:4b6ec25f3caf9f612397bc66351e6dfd298423d5050a2aaf8055104861ed5baa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1729908623427289383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-970804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75f6878c02c356168d8286fe4d911a46,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.c
ontainer.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44d5d41eeb8c8b58abb214424cf349d71a177293d8609c511cdd288d0b070b54,PodSandboxId:f0c208d3d498fcb043ef1185140a72ef3a41b5259cdd883d084bdbdf28629bfc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1729908610417122624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-970804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0f307d03f5ab1b21c66a93d0c1d2592,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.conta
iner.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6f2c8f97e0bdc88b8eaac6e1e9e07794bf7243b0b8b397543961c7f35584e8,PodSandboxId:3e94f340cae30fe5b12ac2568743e57807ed7118caefcbd8d08698b55214c30a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908437041246244,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-970804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76aba175443e2543433bcdb489ed7385,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container
.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d4f5ef5-b67d-4186-bca3-895839c62041 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a3b71eb2723a1       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   56 seconds ago       Exited              kube-controller-manager   15                  4b6ec25f3caf9       kube-controller-manager-kubernetes-upgrade-970804
	44d5d41eeb8c8       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   About a minute ago   Exited              kube-apiserver            15                  f0c208d3d498f       kube-apiserver-kubernetes-upgrade-970804
	6c6f2c8f97e0b       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   4 minutes ago        Running             kube-scheduler            4                   3e94f340cae30       kube-scheduler-kubernetes-upgrade-970804
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.820433] systemd-fstab-generator[551]: Ignoring "noauto" option for root device
	[  +0.061006] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060333] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.204991] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.117076] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.271671] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +3.873962] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[  +1.820286] systemd-fstab-generator[828]: Ignoring "noauto" option for root device
	[  +0.063826] kauditd_printk_skb: 158 callbacks suppressed
	[  +9.537759] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.084987] kauditd_printk_skb: 69 callbacks suppressed
	[  +3.251507] systemd-fstab-generator[1719]: Ignoring "noauto" option for root device
	[  +0.182076] systemd-fstab-generator[1734]: Ignoring "noauto" option for root device
	[  +0.178364] systemd-fstab-generator[1752]: Ignoring "noauto" option for root device
	[  +0.155545] systemd-fstab-generator[1764]: Ignoring "noauto" option for root device
	[  +0.315273] systemd-fstab-generator[1792]: Ignoring "noauto" option for root device
	[  +2.354587] kauditd_printk_skb: 199 callbacks suppressed
	[Oct26 01:59] systemd-fstab-generator[1963]: Ignoring "noauto" option for root device
	[  +2.346896] systemd-fstab-generator[2085]: Ignoring "noauto" option for root device
	[ +22.394454] kauditd_printk_skb: 75 callbacks suppressed
	[Oct26 02:03] systemd-fstab-generator[6827]: Ignoring "noauto" option for root device
	[ +22.844160] kauditd_printk_skb: 58 callbacks suppressed
	[Oct26 02:07] systemd-fstab-generator[7762]: Ignoring "noauto" option for root device
	[ +22.722909] kauditd_printk_skb: 54 callbacks suppressed
	
	
	==> kernel <==
	 02:11:19 up 14 min,  0 users,  load average: 0.09, 0.12, 0.09
	Linux kubernetes-upgrade-970804 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [44d5d41eeb8c8b58abb214424cf349d71a177293d8609c511cdd288d0b070b54] <==
	I1026 02:10:10.581596       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1026 02:10:10.815298       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 02:10:10.815561       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1026 02:10:10.815675       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1026 02:10:10.830520       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1026 02:10:10.832890       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1026 02:10:10.832956       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1026 02:10:10.833190       1 instance.go:232] Using reconciler: lease
	W1026 02:10:10.834115       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 02:10:11.816713       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 02:10:11.817058       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 02:10:11.835318       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 02:10:13.127153       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 02:10:13.263673       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 02:10:13.493165       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 02:10:15.268928       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 02:10:15.371392       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 02:10:15.743367       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 02:10:18.594088       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 02:10:19.706263       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 02:10:20.074218       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 02:10:24.287787       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 02:10:27.190308       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 02:10:27.578130       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1026 02:10:30.834550       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [a3b71eb2723a1a3180087cbe3d02d2628dac81fc2ac6d749045b91e1a0cb307c] <==
	I1026 02:10:23.795452       1 serving.go:386] Generated self-signed cert in-memory
	I1026 02:10:24.047262       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1026 02:10:24.047351       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 02:10:24.048773       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1026 02:10:24.048948       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1026 02:10:24.049043       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1026 02:10:24.049083       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1026 02:10:41.841709       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.72.48:8443/healthz\": dial tcp 192.168.72.48:8443: connect: connection refused"
	
	
	==> kube-scheduler [6c6f2c8f97e0bdc88b8eaac6e1e9e07794bf7243b0b8b397543961c7f35584e8] <==
	E1026 02:10:39.785986       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.72.48:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.72.48:8443: connect: connection refused" logger="UnhandledError"
	W1026 02:10:42.202529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.72.48:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.72.48:8443: connect: connection refused
	E1026 02:10:42.202590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.72.48:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.72.48:8443: connect: connection refused" logger="UnhandledError"
	W1026 02:10:42.560069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.72.48:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.72.48:8443: connect: connection refused
	E1026 02:10:42.560127       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.72.48:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.72.48:8443: connect: connection refused" logger="UnhandledError"
	W1026 02:10:43.122034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.72.48:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.72.48:8443: connect: connection refused
	E1026 02:10:43.122110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.72.48:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.72.48:8443: connect: connection refused" logger="UnhandledError"
	W1026 02:10:54.463413       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.72.48:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.72.48:8443: connect: connection refused
	E1026 02:10:54.463497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.72.48:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.72.48:8443: connect: connection refused" logger="UnhandledError"
	W1026 02:10:55.167937       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.72.48:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.72.48:8443: connect: connection refused
	E1026 02:10:55.168027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.72.48:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.72.48:8443: connect: connection refused" logger="UnhandledError"
	W1026 02:10:57.382012       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.72.48:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.72.48:8443: connect: connection refused
	E1026 02:10:57.382116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.72.48:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.72.48:8443: connect: connection refused" logger="UnhandledError"
	W1026 02:10:58.821034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.72.48:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.72.48:8443: connect: connection refused
	E1026 02:10:58.821103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.72.48:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.72.48:8443: connect: connection refused" logger="UnhandledError"
	W1026 02:10:59.619950       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.72.48:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.72.48:8443: connect: connection refused
	E1026 02:10:59.620030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.72.48:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.72.48:8443: connect: connection refused" logger="UnhandledError"
	W1026 02:11:00.124981       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.72.48:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.72.48:8443: connect: connection refused
	E1026 02:11:00.125042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.72.48:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.72.48:8443: connect: connection refused" logger="UnhandledError"
	W1026 02:11:02.051319       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.72.48:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.72.48:8443: connect: connection refused
	E1026 02:11:02.051394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.72.48:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.72.48:8443: connect: connection refused" logger="UnhandledError"
	W1026 02:11:13.713155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.72.48:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.72.48:8443: connect: connection refused
	E1026 02:11:13.713261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.72.48:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.72.48:8443: connect: connection refused" logger="UnhandledError"
	W1026 02:11:17.396326       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.72.48:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.72.48:8443: connect: connection refused
	E1026 02:11:17.396467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.72.48:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.72.48:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Oct 26 02:11:05 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:05.856342    7769 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.48:8443: connect: connection refused" node="kubernetes-upgrade-970804"
	Oct 26 02:11:06 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:06.029454    7769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-970804?timeout=10s\": dial tcp 192.168.72.48:8443: connect: connection refused" interval="7s"
	Oct 26 02:11:06 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:06.495778    7769 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729908666495331021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:11:06 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:06.495887    7769 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729908666495331021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:11:07 kubernetes-upgrade-970804 kubelet[7769]: I1026 02:11:07.406529    7769 scope.go:117] "RemoveContainer" containerID="44d5d41eeb8c8b58abb214424cf349d71a177293d8609c511cdd288d0b070b54"
	Oct 26 02:11:07 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:07.406914    7769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-970804_kube-system(a0f307d03f5ab1b21c66a93d0c1d2592)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-970804" podUID="a0f307d03f5ab1b21c66a93d0c1d2592"
	Oct 26 02:11:09 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:09.415334    7769 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-kubernetes-upgrade-970804_kube-system_010c84f8ca6b96fa6474e922217a9c93_1\" is already in use by b65eae85b441bcbd3db2305c9010985e7e79f4af51a1529bb6c82ed9a3188e41. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="43468a950a89e4fdd1d8321012be7022d2923b2ddb2696e360b46f995bf62284"
	Oct 26 02:11:09 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:09.415605    7769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:etcd,Image:registry.k8s.io/etcd:3.5.15-0,Command:[etcd --advertise-client-urls=https://192.168.72.48:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.72.48:2380 --initial-cluster=kubernetes-upgrade-970804=https://192.168.72.48:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.72.48:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.72.48:2380 --name=kubernetes-upgrade-970804 --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var/lib/
minikube/certs/etcd/ca.crt --proxy-refresh-interval=70000 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etcd-data,ReadOnly:false,MountPath:/var/lib/minikube/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-certs,ReadOnly:false,MountPath:/var/lib/minikube/certs/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},
ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-kubernetes-upgrade-970804_kube-system(010c84f8ca6b96fa6474e922217a9c9
3): CreateContainerError: the container name \"k8s_etcd_etcd-kubernetes-upgrade-970804_kube-system_010c84f8ca6b96fa6474e922217a9c93_1\" is already in use by b65eae85b441bcbd3db2305c9010985e7e79f4af51a1529bb6c82ed9a3188e41. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Oct 26 02:11:09 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:09.416888    7769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-kubernetes-upgrade-970804_kube-system_010c84f8ca6b96fa6474e922217a9c93_1\\\" is already in use by b65eae85b441bcbd3db2305c9010985e7e79f4af51a1529bb6c82ed9a3188e41. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-kubernetes-upgrade-970804" podUID="010c84f8ca6b96fa6474e922217a9c93"
	Oct 26 02:11:10 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:10.098563    7769 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.72.48:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-970804.1801de8eb7a89e68  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-970804,UID:kubernetes-upgrade-970804,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node kubernetes-upgrade-970804 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-970804,},FirstTimestamp:2024-10-26 02:07:16.434984552 +0000 UTC m=+0.530116621,LastTimestamp:2024-10-26 02:07:16.434984552 +0000 UTC m=+0.530116621,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,R
eportingController:kubelet,ReportingInstance:kubernetes-upgrade-970804,}"
	Oct 26 02:11:12 kubernetes-upgrade-970804 kubelet[7769]: I1026 02:11:12.858898    7769 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-970804"
	Oct 26 02:11:12 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:12.860286    7769 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.48:8443: connect: connection refused" node="kubernetes-upgrade-970804"
	Oct 26 02:11:13 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:13.030797    7769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-970804?timeout=10s\": dial tcp 192.168.72.48:8443: connect: connection refused" interval="7s"
	Oct 26 02:11:13 kubernetes-upgrade-970804 kubelet[7769]: I1026 02:11:13.406584    7769 scope.go:117] "RemoveContainer" containerID="a3b71eb2723a1a3180087cbe3d02d2628dac81fc2ac6d749045b91e1a0cb307c"
	Oct 26 02:11:13 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:13.406944    7769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-970804_kube-system(75f6878c02c356168d8286fe4d911a46)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-970804" podUID="75f6878c02c356168d8286fe4d911a46"
	Oct 26 02:11:16 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:16.426697    7769 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 26 02:11:16 kubernetes-upgrade-970804 kubelet[7769]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 26 02:11:16 kubernetes-upgrade-970804 kubelet[7769]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 26 02:11:16 kubernetes-upgrade-970804 kubelet[7769]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 02:11:16 kubernetes-upgrade-970804 kubelet[7769]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 02:11:16 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:16.498165    7769 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729908676497607580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:11:16 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:16.498206    7769 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729908676497607580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:11:19 kubernetes-upgrade-970804 kubelet[7769]: I1026 02:11:19.862149    7769 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-970804"
	Oct 26 02:11:19 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:19.862943    7769 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.48:8443: connect: connection refused" node="kubernetes-upgrade-970804"
	Oct 26 02:11:20 kubernetes-upgrade-970804 kubelet[7769]: E1026 02:11:20.031802    7769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-970804?timeout=10s\": dial tcp 192.168.72.48:8443: connect: connection refused" interval="7s"
	

                                                
                                                
-- /stdout --
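One of the kubelet errors in the dump above spells out its own remedy: the name 'k8s_etcd_etcd-kubernetes-upgrade-970804_kube-system_010c84f8ca6b96fa6474e922217a9c93_1' is held by an old container that has to be removed before etcd can be recreated under it. A minimal sketch of that cleanup, assuming it is run inside the VM against the CRI-O endpoint quoted earlier:

	# Stop the stale container if it is still running, then remove it so the kubelet can recreate etcd under the same name
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock stop b65eae85b441bcbd3db2305c9010985e7e79f4af51a1529bb6c82ed9a3188e41
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock rm b65eae85b441bcbd3db2305c9010985e7e79f4af51a1529bb6c82ed9a3188e41

Whether this alone would have unblocked the cluster is unclear, since kube-apiserver and kube-controller-manager were crash-looping at the same time.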
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-970804 -n kubernetes-upgrade-970804
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-970804 -n kubernetes-upgrade-970804: exit status 2 (224.013492ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-970804" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-970804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-970804
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-970804: (1.096283523s)
--- FAIL: TestKubernetesUpgrade (1175.68s)
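The suggestion logged at 02:11:17 is to retry the start with the kubelet cgroup driver pinned to systemd. A minimal sketch of that retry (only the --extra-config flag is taken from the log; the driver and runtime flags are assumptions carried over from how the other profiles in this run were started):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-970804 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

The issue linked alongside the suggestion, https://github.com/kubernetes/minikube/issues/4172, is the upstream reference for this failure mode.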

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (274.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-385716 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-385716 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m34.512308157s)

                                                
                                                
-- stdout --
	* [old-k8s-version-385716] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-385716" primary control-plane node in "old-k8s-version-385716" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 01:54:38.141255   59122 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:54:38.141378   59122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:54:38.141386   59122 out.go:358] Setting ErrFile to fd 2...
	I1026 01:54:38.141391   59122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:54:38.141594   59122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 01:54:38.142198   59122 out.go:352] Setting JSON to false
	I1026 01:54:38.143133   59122 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5818,"bootTime":1729901860,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 01:54:38.143228   59122 start.go:139] virtualization: kvm guest
	I1026 01:54:38.145940   59122 out.go:177] * [old-k8s-version-385716] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 01:54:38.147562   59122 notify.go:220] Checking for updates...
	I1026 01:54:38.147629   59122 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 01:54:38.149191   59122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:54:38.150674   59122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:54:38.152097   59122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:54:38.153583   59122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 01:54:38.155109   59122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:54:38.157214   59122 config.go:182] Loaded profile config "kubernetes-upgrade-970804": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1026 01:54:38.157367   59122 config.go:182] Loaded profile config "pause-226333": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:54:38.157500   59122 config.go:182] Loaded profile config "stopped-upgrade-300387": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1026 01:54:38.157601   59122 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 01:54:38.194003   59122 out.go:177] * Using the kvm2 driver based on user configuration
	I1026 01:54:38.195230   59122 start.go:297] selected driver: kvm2
	I1026 01:54:38.195244   59122 start.go:901] validating driver "kvm2" against <nil>
	I1026 01:54:38.195259   59122 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:54:38.196034   59122 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:54:38.196120   59122 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 01:54:38.211086   59122 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 01:54:38.211131   59122 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 01:54:38.211347   59122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 01:54:38.211377   59122 cni.go:84] Creating CNI manager for ""
	I1026 01:54:38.211420   59122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 01:54:38.211425   59122 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 01:54:38.211475   59122 start.go:340] cluster config:
	{Name:old-k8s-version-385716 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:54:38.211574   59122 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:54:38.213469   59122 out.go:177] * Starting "old-k8s-version-385716" primary control-plane node in "old-k8s-version-385716" cluster
	I1026 01:54:38.214850   59122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1026 01:54:38.214887   59122 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1026 01:54:38.214907   59122 cache.go:56] Caching tarball of preloaded images
	I1026 01:54:38.214990   59122 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 01:54:38.215001   59122 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
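	The v1.20.0 preload above is resolved from the local cache instead of being downloaded; a quick sanity check of the cached artifact on the build host (a sketch):

  # Confirm the cached preload tarball referenced in the log is present and non-empty
  ls -lh /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4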
	I1026 01:54:38.215078   59122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/config.json ...
	I1026 01:54:38.215096   59122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/config.json: {Name:mk0962d20f7d28288ae7686250863f94140559a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:54:38.215214   59122 start.go:360] acquireMachinesLock for old-k8s-version-385716: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 01:54:43.645637   59122 start.go:364] duration metric: took 5.430399216s to acquireMachinesLock for "old-k8s-version-385716"
	I1026 01:54:43.645719   59122 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-385716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 01:54:43.645813   59122 start.go:125] createHost starting for "" (driver="kvm2")
	I1026 01:54:43.647910   59122 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1026 01:54:43.648089   59122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:54:43.648158   59122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:54:43.664827   59122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I1026 01:54:43.665196   59122 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:54:43.665807   59122 main.go:141] libmachine: Using API Version  1
	I1026 01:54:43.665831   59122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:54:43.666196   59122 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:54:43.666345   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetMachineName
	I1026 01:54:43.666546   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 01:54:43.666741   59122 start.go:159] libmachine.API.Create for "old-k8s-version-385716" (driver="kvm2")
	I1026 01:54:43.666769   59122 client.go:168] LocalClient.Create starting
	I1026 01:54:43.666795   59122 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 01:54:43.666833   59122 main.go:141] libmachine: Decoding PEM data...
	I1026 01:54:43.666846   59122 main.go:141] libmachine: Parsing certificate...
	I1026 01:54:43.666892   59122 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 01:54:43.666909   59122 main.go:141] libmachine: Decoding PEM data...
	I1026 01:54:43.666919   59122 main.go:141] libmachine: Parsing certificate...
	I1026 01:54:43.666936   59122 main.go:141] libmachine: Running pre-create checks...
	I1026 01:54:43.666944   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .PreCreateCheck
	I1026 01:54:43.667259   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetConfigRaw
	I1026 01:54:43.667655   59122 main.go:141] libmachine: Creating machine...
	I1026 01:54:43.667669   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .Create
	I1026 01:54:43.667809   59122 main.go:141] libmachine: (old-k8s-version-385716) Creating KVM machine...
	I1026 01:54:43.669160   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found existing default KVM network
	I1026 01:54:43.671038   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:43.670880   59178 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026a170}
	I1026 01:54:43.671068   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | created network xml: 
	I1026 01:54:43.671085   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | <network>
	I1026 01:54:43.671099   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG |   <name>mk-old-k8s-version-385716</name>
	I1026 01:54:43.671106   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG |   <dns enable='no'/>
	I1026 01:54:43.671118   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG |   
	I1026 01:54:43.671129   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1026 01:54:43.671141   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG |     <dhcp>
	I1026 01:54:43.671166   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1026 01:54:43.671179   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG |     </dhcp>
	I1026 01:54:43.671194   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG |   </ip>
	I1026 01:54:43.671204   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG |   
	I1026 01:54:43.671214   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | </network>
	I1026 01:54:43.671225   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | 
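	The private network defined above is created through the libvirt API by docker-machine-driver-kvm2; roughly the same steps can be reproduced by hand with virsh (a sketch, assuming the XML from the log is saved to mk-old-k8s-version-385716.xml, an illustrative file name):

  # Define, start and autostart the isolated network from the XML shown in the log
  virsh net-define mk-old-k8s-version-385716.xml
  virsh net-start mk-old-k8s-version-385716
  virsh net-autostart mk-old-k8s-version-385716

  # Confirm the bridge and DHCP range match 192.168.39.0/24
  virsh net-dumpxml mk-old-k8s-version-385716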
	I1026 01:54:43.676621   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | trying to create private KVM network mk-old-k8s-version-385716 192.168.39.0/24...
	I1026 01:54:43.751891   59122 main.go:141] libmachine: (old-k8s-version-385716) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716 ...
	I1026 01:54:43.751920   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | private KVM network mk-old-k8s-version-385716 192.168.39.0/24 created
	I1026 01:54:43.751934   59122 main.go:141] libmachine: (old-k8s-version-385716) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 01:54:43.751947   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:43.751796   59178 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:54:43.751970   59122 main.go:141] libmachine: (old-k8s-version-385716) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 01:54:43.996017   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:43.995831   59178 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa...
	I1026 01:54:44.233302   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:44.233176   59178 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/old-k8s-version-385716.rawdisk...
	I1026 01:54:44.233327   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | Writing magic tar header
	I1026 01:54:44.233342   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | Writing SSH key tar header
	I1026 01:54:44.233350   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:44.233292   59178 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716 ...
	I1026 01:54:44.233462   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716
	I1026 01:54:44.233519   59122 main.go:141] libmachine: (old-k8s-version-385716) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716 (perms=drwx------)
	I1026 01:54:44.233544   59122 main.go:141] libmachine: (old-k8s-version-385716) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 01:54:44.233556   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 01:54:44.233580   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:54:44.233595   59122 main.go:141] libmachine: (old-k8s-version-385716) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 01:54:44.233603   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 01:54:44.233612   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 01:54:44.233620   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | Checking permissions on dir: /home/jenkins
	I1026 01:54:44.233636   59122 main.go:141] libmachine: (old-k8s-version-385716) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 01:54:44.233667   59122 main.go:141] libmachine: (old-k8s-version-385716) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 01:54:44.233678   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | Checking permissions on dir: /home
	I1026 01:54:44.233697   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | Skipping /home - not owner
	I1026 01:54:44.233711   59122 main.go:141] libmachine: (old-k8s-version-385716) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 01:54:44.233721   59122 main.go:141] libmachine: (old-k8s-version-385716) Creating domain...
	I1026 01:54:44.234922   59122 main.go:141] libmachine: (old-k8s-version-385716) define libvirt domain using xml: 
	I1026 01:54:44.234951   59122 main.go:141] libmachine: (old-k8s-version-385716) <domain type='kvm'>
	I1026 01:54:44.234961   59122 main.go:141] libmachine: (old-k8s-version-385716)   <name>old-k8s-version-385716</name>
	I1026 01:54:44.234969   59122 main.go:141] libmachine: (old-k8s-version-385716)   <memory unit='MiB'>2200</memory>
	I1026 01:54:44.234978   59122 main.go:141] libmachine: (old-k8s-version-385716)   <vcpu>2</vcpu>
	I1026 01:54:44.234989   59122 main.go:141] libmachine: (old-k8s-version-385716)   <features>
	I1026 01:54:44.234997   59122 main.go:141] libmachine: (old-k8s-version-385716)     <acpi/>
	I1026 01:54:44.235006   59122 main.go:141] libmachine: (old-k8s-version-385716)     <apic/>
	I1026 01:54:44.235014   59122 main.go:141] libmachine: (old-k8s-version-385716)     <pae/>
	I1026 01:54:44.235033   59122 main.go:141] libmachine: (old-k8s-version-385716)     
	I1026 01:54:44.235045   59122 main.go:141] libmachine: (old-k8s-version-385716)   </features>
	I1026 01:54:44.235060   59122 main.go:141] libmachine: (old-k8s-version-385716)   <cpu mode='host-passthrough'>
	I1026 01:54:44.235092   59122 main.go:141] libmachine: (old-k8s-version-385716)   
	I1026 01:54:44.235115   59122 main.go:141] libmachine: (old-k8s-version-385716)   </cpu>
	I1026 01:54:44.235130   59122 main.go:141] libmachine: (old-k8s-version-385716)   <os>
	I1026 01:54:44.235141   59122 main.go:141] libmachine: (old-k8s-version-385716)     <type>hvm</type>
	I1026 01:54:44.235153   59122 main.go:141] libmachine: (old-k8s-version-385716)     <boot dev='cdrom'/>
	I1026 01:54:44.235163   59122 main.go:141] libmachine: (old-k8s-version-385716)     <boot dev='hd'/>
	I1026 01:54:44.235174   59122 main.go:141] libmachine: (old-k8s-version-385716)     <bootmenu enable='no'/>
	I1026 01:54:44.235184   59122 main.go:141] libmachine: (old-k8s-version-385716)   </os>
	I1026 01:54:44.235195   59122 main.go:141] libmachine: (old-k8s-version-385716)   <devices>
	I1026 01:54:44.235211   59122 main.go:141] libmachine: (old-k8s-version-385716)     <disk type='file' device='cdrom'>
	I1026 01:54:44.235229   59122 main.go:141] libmachine: (old-k8s-version-385716)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/boot2docker.iso'/>
	I1026 01:54:44.235241   59122 main.go:141] libmachine: (old-k8s-version-385716)       <target dev='hdc' bus='scsi'/>
	I1026 01:54:44.235252   59122 main.go:141] libmachine: (old-k8s-version-385716)       <readonly/>
	I1026 01:54:44.235262   59122 main.go:141] libmachine: (old-k8s-version-385716)     </disk>
	I1026 01:54:44.235273   59122 main.go:141] libmachine: (old-k8s-version-385716)     <disk type='file' device='disk'>
	I1026 01:54:44.235293   59122 main.go:141] libmachine: (old-k8s-version-385716)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 01:54:44.235342   59122 main.go:141] libmachine: (old-k8s-version-385716)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/old-k8s-version-385716.rawdisk'/>
	I1026 01:54:44.235363   59122 main.go:141] libmachine: (old-k8s-version-385716)       <target dev='hda' bus='virtio'/>
	I1026 01:54:44.235374   59122 main.go:141] libmachine: (old-k8s-version-385716)     </disk>
	I1026 01:54:44.235388   59122 main.go:141] libmachine: (old-k8s-version-385716)     <interface type='network'>
	I1026 01:54:44.235401   59122 main.go:141] libmachine: (old-k8s-version-385716)       <source network='mk-old-k8s-version-385716'/>
	I1026 01:54:44.235411   59122 main.go:141] libmachine: (old-k8s-version-385716)       <model type='virtio'/>
	I1026 01:54:44.235419   59122 main.go:141] libmachine: (old-k8s-version-385716)     </interface>
	I1026 01:54:44.235429   59122 main.go:141] libmachine: (old-k8s-version-385716)     <interface type='network'>
	I1026 01:54:44.235443   59122 main.go:141] libmachine: (old-k8s-version-385716)       <source network='default'/>
	I1026 01:54:44.235463   59122 main.go:141] libmachine: (old-k8s-version-385716)       <model type='virtio'/>
	I1026 01:54:44.235476   59122 main.go:141] libmachine: (old-k8s-version-385716)     </interface>
	I1026 01:54:44.235495   59122 main.go:141] libmachine: (old-k8s-version-385716)     <serial type='pty'>
	I1026 01:54:44.235508   59122 main.go:141] libmachine: (old-k8s-version-385716)       <target port='0'/>
	I1026 01:54:44.235518   59122 main.go:141] libmachine: (old-k8s-version-385716)     </serial>
	I1026 01:54:44.235530   59122 main.go:141] libmachine: (old-k8s-version-385716)     <console type='pty'>
	I1026 01:54:44.235546   59122 main.go:141] libmachine: (old-k8s-version-385716)       <target type='serial' port='0'/>
	I1026 01:54:44.235558   59122 main.go:141] libmachine: (old-k8s-version-385716)     </console>
	I1026 01:54:44.235566   59122 main.go:141] libmachine: (old-k8s-version-385716)     <rng model='virtio'>
	I1026 01:54:44.235577   59122 main.go:141] libmachine: (old-k8s-version-385716)       <backend model='random'>/dev/random</backend>
	I1026 01:54:44.235585   59122 main.go:141] libmachine: (old-k8s-version-385716)     </rng>
	I1026 01:54:44.235592   59122 main.go:141] libmachine: (old-k8s-version-385716)     
	I1026 01:54:44.235601   59122 main.go:141] libmachine: (old-k8s-version-385716)     
	I1026 01:54:44.235617   59122 main.go:141] libmachine: (old-k8s-version-385716)   </devices>
	I1026 01:54:44.235629   59122 main.go:141] libmachine: (old-k8s-version-385716) </domain>
	I1026 01:54:44.235640   59122 main.go:141] libmachine: (old-k8s-version-385716) 
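	The domain XML above is likewise submitted through the libvirt API; defining and booting the same guest manually would look roughly like this (a sketch; the XML file name is illustrative and the VM is normally managed entirely by the driver):

  # Define the guest from the emitted XML and boot it
  virsh define old-k8s-version-385716.xml
  virsh start old-k8s-version-385716

  # Inspect the two virtio NICs (network 'default' plus 'mk-old-k8s-version-385716')
  virsh domiflist old-k8s-version-385716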
	I1026 01:54:44.240815   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:a9:89:55 in network default
	I1026 01:54:44.241675   59122 main.go:141] libmachine: (old-k8s-version-385716) Ensuring networks are active...
	I1026 01:54:44.241713   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:54:44.242645   59122 main.go:141] libmachine: (old-k8s-version-385716) Ensuring network default is active
	I1026 01:54:44.243135   59122 main.go:141] libmachine: (old-k8s-version-385716) Ensuring network mk-old-k8s-version-385716 is active
	I1026 01:54:44.243831   59122 main.go:141] libmachine: (old-k8s-version-385716) Getting domain xml...
	I1026 01:54:44.244736   59122 main.go:141] libmachine: (old-k8s-version-385716) Creating domain...
	I1026 01:54:45.760626   59122 main.go:141] libmachine: (old-k8s-version-385716) Waiting to get IP...
	I1026 01:54:45.761629   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:54:45.762206   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 01:54:45.762226   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:45.762116   59178 retry.go:31] will retry after 237.458017ms: waiting for machine to come up
	I1026 01:54:46.001822   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:54:46.002409   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 01:54:46.002425   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:46.002343   59178 retry.go:31] will retry after 313.742335ms: waiting for machine to come up
	I1026 01:54:46.317850   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:54:46.318406   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 01:54:46.318437   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:46.318366   59178 retry.go:31] will retry after 339.479459ms: waiting for machine to come up
	I1026 01:54:46.659602   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:54:46.660008   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 01:54:46.660031   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:46.659953   59178 retry.go:31] will retry after 586.325161ms: waiting for machine to come up
	I1026 01:54:47.247891   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:54:47.248533   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 01:54:47.248562   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:47.248499   59178 retry.go:31] will retry after 688.551167ms: waiting for machine to come up
	I1026 01:54:47.938131   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:54:47.938744   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 01:54:47.938773   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:47.938700   59178 retry.go:31] will retry after 770.141383ms: waiting for machine to come up
	I1026 01:54:48.710158   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:54:48.710603   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 01:54:48.710661   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:48.710559   59178 retry.go:31] will retry after 946.418995ms: waiting for machine to come up
	I1026 01:54:49.658705   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:54:49.659166   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 01:54:49.659192   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:49.659115   59178 retry.go:31] will retry after 1.113043385s: waiting for machine to come up
	I1026 01:54:50.773379   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:54:50.773958   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 01:54:50.773989   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:50.773908   59178 retry.go:31] will retry after 1.622216094s: waiting for machine to come up
	I1026 01:54:52.398447   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:54:52.398902   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 01:54:52.398926   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:52.398855   59178 retry.go:31] will retry after 1.824973458s: waiting for machine to come up
	I1026 01:54:54.225242   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:54:54.225745   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 01:54:54.225772   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:54.225689   59178 retry.go:31] will retry after 1.915995621s: waiting for machine to come up
	I1026 01:54:56.143520   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:54:56.143954   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 01:54:56.143982   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:56.143909   59178 retry.go:31] will retry after 2.617102962s: waiting for machine to come up
	I1026 01:54:58.762830   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:54:58.763359   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 01:54:58.763394   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:54:58.763323   59178 retry.go:31] will retry after 3.698062956s: waiting for machine to come up
	I1026 01:55:02.466183   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:02.466622   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 01:55:02.466690   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 01:55:02.466616   59178 retry.go:31] will retry after 4.092617971s: waiting for machine to come up
	I1026 01:55:06.563174   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:06.563753   59122 main.go:141] libmachine: (old-k8s-version-385716) Found IP for machine: 192.168.39.33
	I1026 01:55:06.563772   59122 main.go:141] libmachine: (old-k8s-version-385716) Reserving static IP address...
	I1026 01:55:06.563786   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has current primary IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:06.564228   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-385716", mac: "52:54:00:f3:3d:37", ip: "192.168.39.33"} in network mk-old-k8s-version-385716
	I1026 01:55:06.651247   59122 main.go:141] libmachine: (old-k8s-version-385716) Reserved static IP address: 192.168.39.33
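	The wait loop above polls for a DHCP lease with an increasing backoff until the guest's MAC (52:54:00:f3:3d:37) shows up; the same lookup can be done by hand (a sketch):

  # List the leases handed out on the private network
  virsh net-dhcp-leases mk-old-k8s-version-385716

  # Or query the domain's addresses directly from the lease database
  virsh domifaddr old-k8s-version-385716 --source lease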
	I1026 01:55:06.651279   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | Getting to WaitForSSH function...
	I1026 01:55:06.651287   59122 main.go:141] libmachine: (old-k8s-version-385716) Waiting for SSH to be available...
	I1026 01:55:06.654220   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:06.654780   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:06.654817   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:06.655035   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | Using SSH client type: external
	I1026 01:55:06.655066   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa (-rw-------)
	I1026 01:55:06.655123   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.33 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 01:55:06.655152   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | About to run SSH command:
	I1026 01:55:06.655169   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | exit 0
	I1026 01:55:06.789503   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | SSH cmd err, output: <nil>: 
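	WaitForSSH above simply retries a no-op command over SSH until it succeeds; a hand-rolled equivalent using the same key and options might look like this (a sketch, with the retry count and sleep chosen arbitrarily):

  KEY=/home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa

  # Probe until sshd inside the guest accepts the key-based login
  for i in $(seq 1 30); do
    if ssh -i "$KEY" -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
           -o ConnectTimeout=10 docker@192.168.39.33 'exit 0'; then
      echo "SSH is up"
      break
    fi
    sleep 5
  done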
	I1026 01:55:06.789790   59122 main.go:141] libmachine: (old-k8s-version-385716) KVM machine creation complete!
	I1026 01:55:06.790126   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetConfigRaw
	I1026 01:55:06.790679   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 01:55:06.790860   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 01:55:06.791007   59122 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 01:55:06.791020   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetState
	I1026 01:55:06.792413   59122 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 01:55:06.792431   59122 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 01:55:06.792438   59122 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 01:55:06.792447   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 01:55:06.795073   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:06.795506   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:06.795539   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:06.795692   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 01:55:06.795840   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 01:55:06.795971   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 01:55:06.796102   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 01:55:06.796214   59122 main.go:141] libmachine: Using SSH client type: native
	I1026 01:55:06.796436   59122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I1026 01:55:06.796453   59122 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 01:55:06.908913   59122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:55:06.908942   59122 main.go:141] libmachine: Detecting the provisioner...
	I1026 01:55:06.908955   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 01:55:06.912006   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:06.912412   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:06.912440   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:06.912709   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 01:55:06.912881   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 01:55:06.913014   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 01:55:06.913217   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 01:55:06.913370   59122 main.go:141] libmachine: Using SSH client type: native
	I1026 01:55:06.913577   59122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I1026 01:55:06.913592   59122 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 01:55:07.026230   59122 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 01:55:07.026311   59122 main.go:141] libmachine: found compatible host: buildroot
	I1026 01:55:07.026320   59122 main.go:141] libmachine: Provisioning with buildroot...
	I1026 01:55:07.026331   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetMachineName
	I1026 01:55:07.026584   59122 buildroot.go:166] provisioning hostname "old-k8s-version-385716"
	I1026 01:55:07.026607   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetMachineName
	I1026 01:55:07.026763   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 01:55:07.029214   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:07.029615   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:07.029643   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:07.029824   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 01:55:07.029993   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 01:55:07.030132   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 01:55:07.030294   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 01:55:07.030476   59122 main.go:141] libmachine: Using SSH client type: native
	I1026 01:55:07.030724   59122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I1026 01:55:07.030742   59122 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-385716 && echo "old-k8s-version-385716" | sudo tee /etc/hostname
	I1026 01:55:07.165629   59122 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-385716
	
	I1026 01:55:07.165686   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 01:55:07.168338   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:07.168768   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:07.168792   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:07.169169   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 01:55:07.169357   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 01:55:07.169539   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 01:55:07.169690   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 01:55:07.169838   59122 main.go:141] libmachine: Using SSH client type: native
	I1026 01:55:07.170067   59122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I1026 01:55:07.170094   59122 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-385716' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-385716/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-385716' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:55:07.304426   59122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:55:07.304457   59122 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 01:55:07.304486   59122 buildroot.go:174] setting up certificates
	I1026 01:55:07.304496   59122 provision.go:84] configureAuth start
	I1026 01:55:07.304505   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetMachineName
	I1026 01:55:07.304796   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetIP
	I1026 01:55:07.307245   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:07.307754   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:07.307794   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:07.307970   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 01:55:07.310747   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:07.311183   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:07.311209   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:07.311336   59122 provision.go:143] copyHostCerts
	I1026 01:55:07.311399   59122 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 01:55:07.311411   59122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 01:55:07.311470   59122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 01:55:07.311646   59122 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 01:55:07.311669   59122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 01:55:07.311715   59122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 01:55:07.311822   59122 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 01:55:07.311832   59122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 01:55:07.311861   59122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 01:55:07.311954   59122 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-385716 san=[127.0.0.1 192.168.39.33 localhost minikube old-k8s-version-385716]
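	The server certificate generated above should carry the SANs listed in that log entry (127.0.0.1, 192.168.39.33, localhost, minikube, old-k8s-version-385716); this can be confirmed with openssl (a sketch):

  # Print the Subject Alternative Names of the generated server certificate
  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem \
    | grep -A1 'Subject Alternative Name'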
	I1026 01:55:07.568505   59122 provision.go:177] copyRemoteCerts
	I1026 01:55:07.568556   59122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:55:07.568578   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 01:55:07.572358   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:07.572832   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:07.572854   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:07.573119   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 01:55:07.573359   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 01:55:07.573541   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 01:55:07.573699   59122 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa Username:docker}
	I1026 01:55:07.661761   59122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 01:55:07.686439   59122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 01:55:07.714432   59122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1026 01:55:07.740731   59122 provision.go:87] duration metric: took 436.220212ms to configureAuth
	I1026 01:55:07.740762   59122 buildroot.go:189] setting minikube options for container-runtime
	I1026 01:55:07.740931   59122 config.go:182] Loaded profile config "old-k8s-version-385716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1026 01:55:07.741000   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 01:55:07.744332   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:07.744736   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:07.744767   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:07.744981   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 01:55:07.745231   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 01:55:07.745435   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 01:55:07.745593   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 01:55:07.745768   59122 main.go:141] libmachine: Using SSH client type: native
	I1026 01:55:07.745977   59122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I1026 01:55:07.745994   59122 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 01:55:07.982157   59122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
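	The drop-in written above tells CRI-O to treat the 10.96.0.0/12 service CIDR as an insecure registry and then restarts the service; whether it took effect can be checked on the guest (a sketch, reusing the machine's SSH key):

  KEY=/home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa

  # Verify the drop-in exists and that CRI-O came back up after the restart
  ssh -i "$KEY" -o StrictHostKeyChecking=no docker@192.168.39.33 'cat /etc/sysconfig/crio.minikube'
  ssh -i "$KEY" -o StrictHostKeyChecking=no docker@192.168.39.33 'sudo systemctl is-active crio'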
	
	I1026 01:55:07.982190   59122 main.go:141] libmachine: Checking connection to Docker...
	I1026 01:55:07.982200   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetURL
	I1026 01:55:07.983646   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | Using libvirt version 6000000
	I1026 01:55:07.986062   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:07.986418   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:07.986450   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:07.986606   59122 main.go:141] libmachine: Docker is up and running!
	I1026 01:55:07.986626   59122 main.go:141] libmachine: Reticulating splines...
	I1026 01:55:07.986634   59122 client.go:171] duration metric: took 24.319855915s to LocalClient.Create
	I1026 01:55:07.986668   59122 start.go:167] duration metric: took 24.319920366s to libmachine.API.Create "old-k8s-version-385716"
	I1026 01:55:07.986682   59122 start.go:293] postStartSetup for "old-k8s-version-385716" (driver="kvm2")
	I1026 01:55:07.986697   59122 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:55:07.986724   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 01:55:07.986960   59122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:55:07.986984   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 01:55:07.989280   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:07.989628   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:07.989659   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:07.989809   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 01:55:07.989996   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 01:55:07.990148   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 01:55:07.990261   59122 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa Username:docker}
	I1026 01:55:08.076966   59122 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:55:08.081239   59122 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 01:55:08.081272   59122 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 01:55:08.081334   59122 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 01:55:08.081485   59122 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 01:55:08.081628   59122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:55:08.091224   59122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:55:08.117988   59122 start.go:296] duration metric: took 131.289381ms for postStartSetup
	I1026 01:55:08.118042   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetConfigRaw
	I1026 01:55:08.118668   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetIP
	I1026 01:55:08.121274   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:08.121718   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:08.121752   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:08.121964   59122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/config.json ...
	I1026 01:55:08.122297   59122 start.go:128] duration metric: took 24.476463374s to createHost
	I1026 01:55:08.122331   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 01:55:08.125199   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:08.125563   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:08.125594   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:08.125708   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 01:55:08.125882   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 01:55:08.126049   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 01:55:08.126182   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 01:55:08.126336   59122 main.go:141] libmachine: Using SSH client type: native
	I1026 01:55:08.126527   59122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I1026 01:55:08.126546   59122 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 01:55:08.245948   59122 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729907708.220936092
	
	I1026 01:55:08.245976   59122 fix.go:216] guest clock: 1729907708.220936092
	I1026 01:55:08.245986   59122 fix.go:229] Guest: 2024-10-26 01:55:08.220936092 +0000 UTC Remote: 2024-10-26 01:55:08.122316669 +0000 UTC m=+30.021850779 (delta=98.619423ms)
	I1026 01:55:08.246035   59122 fix.go:200] guest clock delta is within tolerance: 98.619423ms
	I1026 01:55:08.246044   59122 start.go:83] releasing machines lock for "old-k8s-version-385716", held for 24.600360208s
	I1026 01:55:08.246074   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 01:55:08.246353   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetIP
	I1026 01:55:08.249304   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:08.422320   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:08.422358   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:08.422567   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 01:55:08.423232   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 01:55:08.423442   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 01:55:08.423528   59122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:55:08.423579   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 01:55:08.423689   59122 ssh_runner.go:195] Run: cat /version.json
	I1026 01:55:08.423714   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 01:55:08.968070   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:08.968388   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:08.968620   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:08.968647   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:08.968694   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:08.968735   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:08.968827   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 01:55:08.968955   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 01:55:08.969040   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 01:55:08.969127   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 01:55:08.969201   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 01:55:08.969261   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 01:55:08.969318   59122 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa Username:docker}
	I1026 01:55:08.969368   59122 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa Username:docker}
	I1026 01:55:09.079878   59122 ssh_runner.go:195] Run: systemctl --version
	I1026 01:55:09.086007   59122 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 01:55:09.244524   59122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 01:55:09.252664   59122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 01:55:09.252731   59122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:55:09.273513   59122 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 01:55:09.273541   59122 start.go:495] detecting cgroup driver to use...
	I1026 01:55:09.273632   59122 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 01:55:09.290877   59122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 01:55:09.311234   59122 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:55:09.311291   59122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:55:09.330698   59122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:55:09.347709   59122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:55:09.492764   59122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:55:09.669732   59122 docker.go:233] disabling docker service ...
	I1026 01:55:09.669811   59122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:55:09.684024   59122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:55:09.698012   59122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:55:09.824865   59122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:55:09.953049   59122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:55:09.972947   59122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:55:09.996785   59122 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1026 01:55:09.996865   59122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:55:10.009545   59122 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 01:55:10.009617   59122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:55:10.023786   59122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:55:10.036266   59122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 01:55:10.047637   59122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:55:10.062206   59122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:55:10.073874   59122 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 01:55:10.073940   59122 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 01:55:10.088845   59122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:55:10.101003   59122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:55:10.214940   59122 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 01:55:10.305611   59122 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 01:55:10.305694   59122 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 01:55:10.310486   59122 start.go:563] Will wait 60s for crictl version
	I1026 01:55:10.310546   59122 ssh_runner.go:195] Run: which crictl
	I1026 01:55:10.314361   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:55:10.353263   59122 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 01:55:10.353353   59122 ssh_runner.go:195] Run: crio --version
	I1026 01:55:10.383806   59122 ssh_runner.go:195] Run: crio --version
	I1026 01:55:10.414821   59122 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1026 01:55:10.416135   59122 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetIP
	I1026 01:55:10.419096   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:10.419449   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 02:54:58 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 01:55:10.419476   59122 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 01:55:10.419721   59122 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 01:55:10.423740   59122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:55:10.436255   59122 kubeadm.go:883] updating cluster {Name:old-k8s-version-385716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 01:55:10.436401   59122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1026 01:55:10.436466   59122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:55:10.474050   59122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1026 01:55:10.474125   59122 ssh_runner.go:195] Run: which lz4
	I1026 01:55:10.478090   59122 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 01:55:10.481991   59122 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 01:55:10.482027   59122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1026 01:55:12.038235   59122 crio.go:462] duration metric: took 1.560181666s to copy over tarball
	I1026 01:55:12.038323   59122 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 01:55:14.732299   59122 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.693947552s)
	I1026 01:55:14.732337   59122 crio.go:469] duration metric: took 2.694070154s to extract the tarball
	I1026 01:55:14.732371   59122 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 01:55:14.776217   59122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:55:14.818050   59122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1026 01:55:14.818075   59122 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1026 01:55:14.818139   59122 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1026 01:55:14.818149   59122 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1026 01:55:14.818172   59122 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1026 01:55:14.818182   59122 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1026 01:55:14.818146   59122 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:55:14.818219   59122 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1026 01:55:14.818225   59122 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1026 01:55:14.818473   59122 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 01:55:14.819485   59122 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1026 01:55:14.819492   59122 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1026 01:55:14.819509   59122 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1026 01:55:14.819506   59122 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1026 01:55:14.819548   59122 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:55:14.819553   59122 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 01:55:14.819556   59122 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1026 01:55:14.819566   59122 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1026 01:55:15.040960   59122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1026 01:55:15.057723   59122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 01:55:15.061782   59122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1026 01:55:15.063984   59122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1026 01:55:15.066959   59122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1026 01:55:15.073945   59122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1026 01:55:15.077025   59122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1026 01:55:15.111604   59122 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1026 01:55:15.111658   59122 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1026 01:55:15.111708   59122 ssh_runner.go:195] Run: which crictl
	I1026 01:55:15.221438   59122 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1026 01:55:15.221471   59122 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1026 01:55:15.221489   59122 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 01:55:15.221506   59122 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1026 01:55:15.221545   59122 ssh_runner.go:195] Run: which crictl
	I1026 01:55:15.221562   59122 ssh_runner.go:195] Run: which crictl
	I1026 01:55:15.221587   59122 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1026 01:55:15.221616   59122 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1026 01:55:15.221659   59122 ssh_runner.go:195] Run: which crictl
	I1026 01:55:15.221672   59122 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1026 01:55:15.221712   59122 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1026 01:55:15.221749   59122 ssh_runner.go:195] Run: which crictl
	I1026 01:55:15.223742   59122 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1026 01:55:15.223768   59122 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1026 01:55:15.223799   59122 ssh_runner.go:195] Run: which crictl
	I1026 01:55:15.236093   59122 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1026 01:55:15.236136   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1026 01:55:15.236136   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1026 01:55:15.236150   59122 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1026 01:55:15.236151   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1026 01:55:15.236162   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1026 01:55:15.236136   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 01:55:15.236176   59122 ssh_runner.go:195] Run: which crictl
	I1026 01:55:15.236197   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1026 01:55:15.359758   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1026 01:55:15.359783   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 01:55:15.359841   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1026 01:55:15.359841   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1026 01:55:15.366816   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1026 01:55:15.366889   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1026 01:55:15.366967   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1026 01:55:15.501714   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1026 01:55:15.501765   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 01:55:15.511758   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1026 01:55:15.511874   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1026 01:55:15.511920   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1026 01:55:15.511981   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1026 01:55:15.516242   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1026 01:55:15.638784   59122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1026 01:55:15.638836   59122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1026 01:55:15.638927   59122 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1026 01:55:15.664060   59122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1026 01:55:15.664077   59122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1026 01:55:15.670237   59122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1026 01:55:15.670272   59122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1026 01:55:15.688879   59122 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1026 01:55:15.933674   59122 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:55:16.072744   59122 cache_images.go:92] duration metric: took 1.254651703s to LoadCachedImages
	W1026 01:55:16.072840   59122 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1026 01:55:16.072855   59122 kubeadm.go:934] updating node { 192.168.39.33 8443 v1.20.0 crio true true} ...
	I1026 01:55:16.072982   59122 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-385716 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 01:55:16.073072   59122 ssh_runner.go:195] Run: crio config
	I1026 01:55:16.125632   59122 cni.go:84] Creating CNI manager for ""
	I1026 01:55:16.125670   59122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 01:55:16.125684   59122 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 01:55:16.125711   59122 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.33 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-385716 NodeName:old-k8s-version-385716 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1026 01:55:16.125878   59122 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-385716"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 01:55:16.125958   59122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1026 01:55:16.135757   59122 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 01:55:16.135840   59122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 01:55:16.145376   59122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1026 01:55:16.163254   59122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:55:16.181012   59122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1026 01:55:16.197703   59122 ssh_runner.go:195] Run: grep 192.168.39.33	control-plane.minikube.internal$ /etc/hosts
	I1026 01:55:16.202411   59122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:55:16.217790   59122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:55:16.346227   59122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:55:16.362931   59122 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716 for IP: 192.168.39.33
	I1026 01:55:16.362959   59122 certs.go:194] generating shared ca certs ...
	I1026 01:55:16.362980   59122 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:55:16.363168   59122 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 01:55:16.363226   59122 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 01:55:16.363240   59122 certs.go:256] generating profile certs ...
	I1026 01:55:16.363307   59122 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.key
	I1026 01:55:16.363326   59122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt with IP's: []
	I1026 01:55:16.496871   59122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt ...
	I1026 01:55:16.496899   59122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: {Name:mk2c905e51f60500973b8c057dd985b942f3f419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:55:16.497078   59122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.key ...
	I1026 01:55:16.497092   59122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.key: {Name:mk94645f84c23b79f32c142654d08a0305159f8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:55:16.497172   59122 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.key.63a78891
	I1026 01:55:16.497189   59122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.crt.63a78891 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.33]
	I1026 01:55:16.756451   59122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.crt.63a78891 ...
	I1026 01:55:16.756485   59122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.crt.63a78891: {Name:mkab6557744c56edc241c7c420eb0499a7a51240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:55:16.756648   59122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.key.63a78891 ...
	I1026 01:55:16.756661   59122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.key.63a78891: {Name:mk43a41fd43d230ab8e396183004e87cbec2e5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:55:16.756748   59122 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.crt.63a78891 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.crt
	I1026 01:55:16.756845   59122 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.key.63a78891 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.key
	I1026 01:55:16.756901   59122 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/proxy-client.key
	I1026 01:55:16.756922   59122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/proxy-client.crt with IP's: []
	I1026 01:55:16.857739   59122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/proxy-client.crt ...
	I1026 01:55:16.857766   59122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/proxy-client.crt: {Name:mk1b538b4dfedd0feae80127a1da0b511f05983d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:55:16.857935   59122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/proxy-client.key ...
	I1026 01:55:16.857949   59122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/proxy-client.key: {Name:mkb09dd83605ba53145d407962412fef47ebb8b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:55:16.858133   59122 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 01:55:16.858170   59122 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 01:55:16.858181   59122 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:55:16.858203   59122 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 01:55:16.858225   59122 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:55:16.858245   59122 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 01:55:16.858281   59122 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 01:55:16.858845   59122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:55:16.884255   59122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:55:16.908645   59122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:55:16.931922   59122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 01:55:16.955264   59122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 01:55:16.978265   59122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 01:55:17.004538   59122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:55:17.030207   59122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 01:55:17.058663   59122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 01:55:17.087544   59122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:55:17.115318   59122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 01:55:17.141285   59122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 01:55:17.160479   59122 ssh_runner.go:195] Run: openssl version
	I1026 01:55:17.167905   59122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 01:55:17.194856   59122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 01:55:17.202398   59122 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 01:55:17.202473   59122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 01:55:17.208621   59122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:55:17.222380   59122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:55:17.242174   59122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:55:17.247458   59122 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:55:17.247521   59122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:55:17.257588   59122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:55:17.269007   59122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 01:55:17.283638   59122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 01:55:17.288259   59122 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 01:55:17.288311   59122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 01:55:17.294132   59122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 01:55:17.305172   59122 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:55:17.309136   59122 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 01:55:17.309198   59122 kubeadm.go:392] StartCluster: {Name:old-k8s-version-385716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:55:17.309273   59122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 01:55:17.309339   59122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 01:55:17.347293   59122 cri.go:89] found id: ""
	I1026 01:55:17.347368   59122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 01:55:17.357472   59122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 01:55:17.367408   59122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 01:55:17.377457   59122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 01:55:17.377481   59122 kubeadm.go:157] found existing configuration files:
	
	I1026 01:55:17.377525   59122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 01:55:17.388305   59122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 01:55:17.388372   59122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 01:55:17.398429   59122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 01:55:17.407101   59122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 01:55:17.407177   59122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 01:55:17.416252   59122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 01:55:17.424909   59122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 01:55:17.424972   59122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 01:55:17.434035   59122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 01:55:17.443573   59122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 01:55:17.443627   59122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 01:55:17.452806   59122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 01:55:17.571441   59122 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1026 01:55:17.571525   59122 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 01:55:17.715729   59122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 01:55:17.715911   59122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 01:55:17.716057   59122 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1026 01:55:17.893515   59122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 01:55:18.015550   59122 out.go:235]   - Generating certificates and keys ...
	I1026 01:55:18.015692   59122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 01:55:18.015767   59122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 01:55:18.065386   59122 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 01:55:18.155576   59122 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1026 01:55:18.264412   59122 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1026 01:55:18.467453   59122 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1026 01:55:18.601376   59122 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1026 01:55:18.601581   59122 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-385716] and IPs [192.168.39.33 127.0.0.1 ::1]
	I1026 01:55:18.767813   59122 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1026 01:55:18.768211   59122 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-385716] and IPs [192.168.39.33 127.0.0.1 ::1]
	I1026 01:55:19.023421   59122 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 01:55:19.224790   59122 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 01:55:19.425181   59122 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1026 01:55:19.425568   59122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 01:55:19.626937   59122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 01:55:19.720474   59122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 01:55:19.938944   59122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 01:55:20.080688   59122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 01:55:20.102939   59122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 01:55:20.106000   59122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 01:55:20.106202   59122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 01:55:20.253129   59122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 01:55:20.254977   59122 out.go:235]   - Booting up control plane ...
	I1026 01:55:20.255181   59122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 01:55:20.268634   59122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 01:55:20.270433   59122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 01:55:20.271298   59122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 01:55:20.279340   59122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1026 01:56:00.273397   59122 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1026 01:56:00.274204   59122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:56:00.274467   59122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:56:05.274899   59122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:56:05.275169   59122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:56:15.274276   59122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:56:15.274538   59122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:56:35.274113   59122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:56:35.274366   59122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:57:15.275913   59122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:57:15.276177   59122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:57:15.276201   59122 kubeadm.go:310] 
	I1026 01:57:15.276267   59122 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1026 01:57:15.276325   59122 kubeadm.go:310] 		timed out waiting for the condition
	I1026 01:57:15.276336   59122 kubeadm.go:310] 
	I1026 01:57:15.276390   59122 kubeadm.go:310] 	This error is likely caused by:
	I1026 01:57:15.276434   59122 kubeadm.go:310] 		- The kubelet is not running
	I1026 01:57:15.276593   59122 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1026 01:57:15.276617   59122 kubeadm.go:310] 
	I1026 01:57:15.276784   59122 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1026 01:57:15.276836   59122 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1026 01:57:15.276889   59122 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1026 01:57:15.276909   59122 kubeadm.go:310] 
	I1026 01:57:15.277064   59122 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1026 01:57:15.277175   59122 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1026 01:57:15.277190   59122 kubeadm.go:310] 
	I1026 01:57:15.277342   59122 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1026 01:57:15.277486   59122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1026 01:57:15.277620   59122 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1026 01:57:15.277727   59122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1026 01:57:15.277743   59122 kubeadm.go:310] 
	I1026 01:57:15.278340   59122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 01:57:15.278477   59122 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1026 01:57:15.278578   59122 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1026 01:57:15.278722   59122 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-385716] and IPs [192.168.39.33 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-385716] and IPs [192.168.39.33 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-385716] and IPs [192.168.39.33 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-385716] and IPs [192.168.39.33 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1026 01:57:15.278784   59122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1026 01:57:15.765447   59122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:57:15.780917   59122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 01:57:15.790805   59122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 01:57:15.790826   59122 kubeadm.go:157] found existing configuration files:
	
	I1026 01:57:15.790875   59122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 01:57:15.800337   59122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 01:57:15.800407   59122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 01:57:15.811128   59122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 01:57:15.820082   59122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 01:57:15.820150   59122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 01:57:15.829189   59122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 01:57:15.837888   59122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 01:57:15.837960   59122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 01:57:15.847080   59122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 01:57:15.856013   59122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 01:57:15.856066   59122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 01:57:15.865234   59122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 01:57:15.945481   59122 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1026 01:57:15.945567   59122 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 01:57:16.089075   59122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 01:57:16.089259   59122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 01:57:16.089366   59122 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1026 01:57:16.266938   59122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 01:57:16.268682   59122 out.go:235]   - Generating certificates and keys ...
	I1026 01:57:16.268789   59122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 01:57:16.268901   59122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 01:57:16.269027   59122 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1026 01:57:16.269114   59122 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1026 01:57:16.269215   59122 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1026 01:57:16.269287   59122 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1026 01:57:16.269392   59122 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1026 01:57:16.269488   59122 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1026 01:57:16.269602   59122 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1026 01:57:16.269712   59122 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1026 01:57:16.269764   59122 kubeadm.go:310] [certs] Using the existing "sa" key
	I1026 01:57:16.269850   59122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 01:57:16.479403   59122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 01:57:16.628243   59122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 01:57:16.698508   59122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 01:57:16.790562   59122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 01:57:16.814388   59122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 01:57:16.815917   59122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 01:57:16.816011   59122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 01:57:16.965927   59122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 01:57:16.967748   59122 out.go:235]   - Booting up control plane ...
	I1026 01:57:16.967884   59122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 01:57:16.984010   59122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 01:57:16.985977   59122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 01:57:16.987167   59122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 01:57:16.990362   59122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1026 01:57:56.992936   59122 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1026 01:57:56.993380   59122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:57:56.993584   59122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:58:01.994300   59122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:58:01.994494   59122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:58:11.995157   59122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:58:11.995328   59122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:58:31.994067   59122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:58:31.994261   59122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:59:11.993387   59122 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 01:59:11.993647   59122 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 01:59:11.993670   59122 kubeadm.go:310] 
	I1026 01:59:11.993720   59122 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1026 01:59:11.993775   59122 kubeadm.go:310] 		timed out waiting for the condition
	I1026 01:59:11.993784   59122 kubeadm.go:310] 
	I1026 01:59:11.993831   59122 kubeadm.go:310] 	This error is likely caused by:
	I1026 01:59:11.993869   59122 kubeadm.go:310] 		- The kubelet is not running
	I1026 01:59:11.993955   59122 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1026 01:59:11.993964   59122 kubeadm.go:310] 
	I1026 01:59:11.994043   59122 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1026 01:59:11.994077   59122 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1026 01:59:11.994109   59122 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1026 01:59:11.994115   59122 kubeadm.go:310] 
	I1026 01:59:11.994198   59122 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1026 01:59:11.994271   59122 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1026 01:59:11.994280   59122 kubeadm.go:310] 
	I1026 01:59:11.994380   59122 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1026 01:59:11.994484   59122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1026 01:59:11.994601   59122 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1026 01:59:11.994725   59122 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1026 01:59:11.994736   59122 kubeadm.go:310] 
	I1026 01:59:11.995627   59122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 01:59:11.995765   59122 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1026 01:59:11.995857   59122 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1026 01:59:11.995924   59122 kubeadm.go:394] duration metric: took 3m54.686731399s to StartCluster
	I1026 01:59:11.995981   59122 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 01:59:11.996037   59122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 01:59:12.040130   59122 cri.go:89] found id: ""
	I1026 01:59:12.040164   59122 logs.go:282] 0 containers: []
	W1026 01:59:12.040172   59122 logs.go:284] No container was found matching "kube-apiserver"
	I1026 01:59:12.040179   59122 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 01:59:12.040240   59122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 01:59:12.075288   59122 cri.go:89] found id: ""
	I1026 01:59:12.075321   59122 logs.go:282] 0 containers: []
	W1026 01:59:12.075329   59122 logs.go:284] No container was found matching "etcd"
	I1026 01:59:12.075335   59122 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 01:59:12.075384   59122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 01:59:12.112007   59122 cri.go:89] found id: ""
	I1026 01:59:12.112035   59122 logs.go:282] 0 containers: []
	W1026 01:59:12.112043   59122 logs.go:284] No container was found matching "coredns"
	I1026 01:59:12.112049   59122 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 01:59:12.112095   59122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 01:59:12.147721   59122 cri.go:89] found id: ""
	I1026 01:59:12.147761   59122 logs.go:282] 0 containers: []
	W1026 01:59:12.147770   59122 logs.go:284] No container was found matching "kube-scheduler"
	I1026 01:59:12.147776   59122 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 01:59:12.147831   59122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 01:59:12.186081   59122 cri.go:89] found id: ""
	I1026 01:59:12.186116   59122 logs.go:282] 0 containers: []
	W1026 01:59:12.186127   59122 logs.go:284] No container was found matching "kube-proxy"
	I1026 01:59:12.186135   59122 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 01:59:12.186198   59122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 01:59:12.218836   59122 cri.go:89] found id: ""
	I1026 01:59:12.218867   59122 logs.go:282] 0 containers: []
	W1026 01:59:12.218875   59122 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 01:59:12.218881   59122 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 01:59:12.218931   59122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 01:59:12.250666   59122 cri.go:89] found id: ""
	I1026 01:59:12.250712   59122 logs.go:282] 0 containers: []
	W1026 01:59:12.250724   59122 logs.go:284] No container was found matching "kindnet"
	I1026 01:59:12.250738   59122 logs.go:123] Gathering logs for describe nodes ...
	I1026 01:59:12.250758   59122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 01:59:12.361282   59122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 01:59:12.361311   59122 logs.go:123] Gathering logs for CRI-O ...
	I1026 01:59:12.361335   59122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 01:59:12.471058   59122 logs.go:123] Gathering logs for container status ...
	I1026 01:59:12.471103   59122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 01:59:12.524624   59122 logs.go:123] Gathering logs for kubelet ...
	I1026 01:59:12.524655   59122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 01:59:12.583131   59122 logs.go:123] Gathering logs for dmesg ...
	I1026 01:59:12.583163   59122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1026 01:59:12.596268   59122 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1026 01:59:12.596328   59122 out.go:270] * 
	* 
	W1026 01:59:12.596389   59122 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1026 01:59:12.596407   59122 out.go:270] * 
	* 
	W1026 01:59:12.597313   59122 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 01:59:12.600903   59122 out.go:201] 
	W1026 01:59:12.602095   59122 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1026 01:59:12.602134   59122 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1026 01:59:12.602151   59122 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1026 01:59:12.603588   59122 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-385716 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-385716 -n old-k8s-version-385716
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-385716 -n old-k8s-version-385716: exit status 6 (218.735686ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1026 01:59:12.867136   61840 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-385716" does not appear in /home/jenkins/minikube-integration/19868-8680/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-385716" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (274.79s)

TestStartStop/group/no-preload/serial/Stop (139.18s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-093148 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-093148 --alsologtostderr -v=3: exit status 82 (2m0.530814969s)

-- stdout --
	* Stopping node "no-preload-093148"  ...
	
	

-- /stdout --
** stderr ** 
	I1026 01:57:07.761358   61051 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:57:07.761495   61051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:57:07.761507   61051 out.go:358] Setting ErrFile to fd 2...
	I1026 01:57:07.761513   61051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:57:07.761739   61051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 01:57:07.761964   61051 out.go:352] Setting JSON to false
	I1026 01:57:07.762048   61051 mustload.go:65] Loading cluster: no-preload-093148
	I1026 01:57:07.762425   61051 config.go:182] Loaded profile config "no-preload-093148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:57:07.762510   61051 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/config.json ...
	I1026 01:57:07.762677   61051 mustload.go:65] Loading cluster: no-preload-093148
	I1026 01:57:07.762802   61051 config.go:182] Loaded profile config "no-preload-093148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:57:07.762848   61051 stop.go:39] StopHost: no-preload-093148
	I1026 01:57:07.763287   61051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:57:07.763343   61051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:57:07.784762   61051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44373
	I1026 01:57:07.785261   61051 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:57:07.786609   61051 main.go:141] libmachine: Using API Version  1
	I1026 01:57:07.786637   61051 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:57:07.787061   61051 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:57:07.790517   61051 out.go:177] * Stopping node "no-preload-093148"  ...
	I1026 01:57:07.791878   61051 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1026 01:57:07.791926   61051 main.go:141] libmachine: (no-preload-093148) Calling .DriverName
	I1026 01:57:07.792193   61051 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1026 01:57:07.792242   61051 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHHostname
	I1026 01:57:07.796174   61051 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 01:57:07.796642   61051 main.go:141] libmachine: (no-preload-093148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:d1:f6", ip: ""} in network mk-no-preload-093148: {Iface:virbr2 ExpiryTime:2024-10-26 02:55:24 +0000 UTC Type:0 Mac:52:54:00:bc:d1:f6 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:no-preload-093148 Clientid:01:52:54:00:bc:d1:f6}
	I1026 01:57:07.796687   61051 main.go:141] libmachine: (no-preload-093148) DBG | domain no-preload-093148 has defined IP address 192.168.50.9 and MAC address 52:54:00:bc:d1:f6 in network mk-no-preload-093148
	I1026 01:57:07.796867   61051 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHPort
	I1026 01:57:07.797047   61051 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHKeyPath
	I1026 01:57:07.797188   61051 main.go:141] libmachine: (no-preload-093148) Calling .GetSSHUsername
	I1026 01:57:07.797346   61051 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/no-preload-093148/id_rsa Username:docker}
	I1026 01:57:07.930168   61051 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1026 01:57:07.992207   61051 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1026 01:57:08.042439   61051 main.go:141] libmachine: Stopping "no-preload-093148"...
	I1026 01:57:08.042470   61051 main.go:141] libmachine: (no-preload-093148) Calling .GetState
	I1026 01:57:08.044262   61051 main.go:141] libmachine: (no-preload-093148) Calling .Stop
	I1026 01:57:08.049033   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 0/120
	I1026 01:57:09.051383   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 1/120
	I1026 01:57:10.052609   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 2/120
	I1026 01:57:11.053814   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 3/120
	I1026 01:57:12.055849   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 4/120
	I1026 01:57:13.057863   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 5/120
	I1026 01:57:14.059228   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 6/120
	I1026 01:57:15.060242   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 7/120
	I1026 01:57:16.061829   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 8/120
	I1026 01:57:17.063833   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 9/120
	I1026 01:57:18.066060   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 10/120
	I1026 01:57:19.068006   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 11/120
	I1026 01:57:20.069467   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 12/120
	I1026 01:57:21.071895   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 13/120
	I1026 01:57:22.073463   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 14/120
	I1026 01:57:23.075299   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 15/120
	I1026 01:57:24.076760   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 16/120
	I1026 01:57:25.078117   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 17/120
	I1026 01:57:26.080366   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 18/120
	I1026 01:57:27.081751   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 19/120
	I1026 01:57:28.083747   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 20/120
	I1026 01:57:29.084964   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 21/120
	I1026 01:57:30.086357   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 22/120
	I1026 01:57:31.087883   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 23/120
	I1026 01:57:32.089369   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 24/120
	I1026 01:57:33.091168   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 25/120
	I1026 01:57:34.092348   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 26/120
	I1026 01:57:35.093957   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 27/120
	I1026 01:57:36.096161   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 28/120
	I1026 01:57:37.097733   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 29/120
	I1026 01:57:38.100082   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 30/120
	I1026 01:57:39.101486   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 31/120
	I1026 01:57:40.102978   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 32/120
	I1026 01:57:41.104296   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 33/120
	I1026 01:57:42.105774   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 34/120
	I1026 01:57:43.107747   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 35/120
	I1026 01:57:44.109132   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 36/120
	I1026 01:57:45.110456   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 37/120
	I1026 01:57:46.111916   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 38/120
	I1026 01:57:47.113266   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 39/120
	I1026 01:57:48.115306   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 40/120
	I1026 01:57:49.116659   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 41/120
	I1026 01:57:50.118021   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 42/120
	I1026 01:57:51.119880   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 43/120
	I1026 01:57:52.121186   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 44/120
	I1026 01:57:53.123134   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 45/120
	I1026 01:57:54.124309   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 46/120
	I1026 01:57:55.125749   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 47/120
	I1026 01:57:56.127028   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 48/120
	I1026 01:57:57.128407   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 49/120
	I1026 01:57:58.130712   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 50/120
	I1026 01:57:59.132161   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 51/120
	I1026 01:58:00.133294   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 52/120
	I1026 01:58:01.135002   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 53/120
	I1026 01:58:02.136399   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 54/120
	I1026 01:58:03.138773   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 55/120
	I1026 01:58:04.140361   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 56/120
	I1026 01:58:05.141875   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 57/120
	I1026 01:58:06.143163   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 58/120
	I1026 01:58:07.144409   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 59/120
	I1026 01:58:08.146588   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 60/120
	I1026 01:58:09.147957   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 61/120
	I1026 01:58:10.149374   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 62/120
	I1026 01:58:11.150786   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 63/120
	I1026 01:58:12.152081   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 64/120
	I1026 01:58:13.154069   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 65/120
	I1026 01:58:14.156094   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 66/120
	I1026 01:58:15.157561   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 67/120
	I1026 01:58:16.158888   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 68/120
	I1026 01:58:17.160424   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 69/120
	I1026 01:58:18.161780   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 70/120
	I1026 01:58:19.163841   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 71/120
	I1026 01:58:20.165118   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 72/120
	I1026 01:58:21.166506   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 73/120
	I1026 01:58:22.167783   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 74/120
	I1026 01:58:23.169962   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 75/120
	I1026 01:58:24.171952   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 76/120
	I1026 01:58:25.173201   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 77/120
	I1026 01:58:26.174597   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 78/120
	I1026 01:58:27.175914   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 79/120
	I1026 01:58:28.178043   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 80/120
	I1026 01:58:29.179328   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 81/120
	I1026 01:58:30.180486   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 82/120
	I1026 01:58:31.181758   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 83/120
	I1026 01:58:32.182987   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 84/120
	I1026 01:58:33.184732   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 85/120
	I1026 01:58:34.186017   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 86/120
	I1026 01:58:35.187150   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 87/120
	I1026 01:58:36.188491   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 88/120
	I1026 01:58:37.189602   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 89/120
	I1026 01:58:38.191587   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 90/120
	I1026 01:58:39.192750   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 91/120
	I1026 01:58:40.193991   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 92/120
	I1026 01:58:41.195260   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 93/120
	I1026 01:58:42.196410   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 94/120
	I1026 01:58:43.198434   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 95/120
	I1026 01:58:44.199942   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 96/120
	I1026 01:58:45.201168   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 97/120
	I1026 01:58:46.202598   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 98/120
	I1026 01:58:47.203817   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 99/120
	I1026 01:58:48.206266   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 100/120
	I1026 01:58:49.207509   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 101/120
	I1026 01:58:50.208831   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 102/120
	I1026 01:58:51.210279   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 103/120
	I1026 01:58:52.211606   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 104/120
	I1026 01:58:53.213610   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 105/120
	I1026 01:58:54.214721   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 106/120
	I1026 01:58:55.216144   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 107/120
	I1026 01:58:56.217585   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 108/120
	I1026 01:58:57.218778   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 109/120
	I1026 01:58:58.220939   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 110/120
	I1026 01:58:59.222153   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 111/120
	I1026 01:59:00.223756   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 112/120
	I1026 01:59:01.225073   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 113/120
	I1026 01:59:02.226314   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 114/120
	I1026 01:59:03.228355   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 115/120
	I1026 01:59:04.229767   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 116/120
	I1026 01:59:05.231072   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 117/120
	I1026 01:59:06.232374   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 118/120
	I1026 01:59:07.233732   61051 main.go:141] libmachine: (no-preload-093148) Waiting for machine to stop 119/120
	I1026 01:59:08.234280   61051 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1026 01:59:08.234325   61051 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1026 01:59:08.236518   61051 out.go:201] 
	W1026 01:59:08.238078   61051 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1026 01:59:08.238096   61051 out.go:270] * 
	* 
	W1026 01:59:08.240678   61051 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 01:59:08.241890   61051 out.go:201] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-093148 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-093148 -n no-preload-093148
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-093148 -n no-preload-093148: exit status 3 (18.650382527s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1026 01:59:26.893710   61791 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.9:22: connect: no route to host
	E1026 01:59:26.893732   61791 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.9:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-093148" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.18s)

TestStartStop/group/embed-certs/serial/Stop (139.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-767480 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-767480 --alsologtostderr -v=3: exit status 82 (2m0.501265491s)

-- stdout --
	* Stopping node "embed-certs-767480"  ...
	
	

-- /stdout --
** stderr ** 
	I1026 01:57:27.135294   61246 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:57:27.135418   61246 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:57:27.135427   61246 out.go:358] Setting ErrFile to fd 2...
	I1026 01:57:27.135432   61246 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:57:27.135579   61246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 01:57:27.135864   61246 out.go:352] Setting JSON to false
	I1026 01:57:27.135940   61246 mustload.go:65] Loading cluster: embed-certs-767480
	I1026 01:57:27.136267   61246 config.go:182] Loaded profile config "embed-certs-767480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:57:27.136330   61246 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/embed-certs-767480/config.json ...
	I1026 01:57:27.136491   61246 mustload.go:65] Loading cluster: embed-certs-767480
	I1026 01:57:27.136590   61246 config.go:182] Loaded profile config "embed-certs-767480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:57:27.136612   61246 stop.go:39] StopHost: embed-certs-767480
	I1026 01:57:27.136963   61246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:57:27.137005   61246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:57:27.151370   61246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45409
	I1026 01:57:27.151914   61246 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:57:27.152613   61246 main.go:141] libmachine: Using API Version  1
	I1026 01:57:27.152640   61246 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:57:27.152948   61246 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:57:27.155134   61246 out.go:177] * Stopping node "embed-certs-767480"  ...
	I1026 01:57:27.156779   61246 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1026 01:57:27.156824   61246 main.go:141] libmachine: (embed-certs-767480) Calling .DriverName
	I1026 01:57:27.157055   61246 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1026 01:57:27.157083   61246 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHHostname
	I1026 01:57:27.160196   61246 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 01:57:27.160677   61246 main.go:141] libmachine: (embed-certs-767480) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bc:1b", ip: ""} in network mk-embed-certs-767480: {Iface:virbr3 ExpiryTime:2024-10-26 02:56:33 +0000 UTC Type:0 Mac:52:54:00:0d:bc:1b Iaid: IPaddr:192.168.61.84 Prefix:24 Hostname:embed-certs-767480 Clientid:01:52:54:00:0d:bc:1b}
	I1026 01:57:27.160703   61246 main.go:141] libmachine: (embed-certs-767480) DBG | domain embed-certs-767480 has defined IP address 192.168.61.84 and MAC address 52:54:00:0d:bc:1b in network mk-embed-certs-767480
	I1026 01:57:27.160960   61246 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHPort
	I1026 01:57:27.161103   61246 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHKeyPath
	I1026 01:57:27.161267   61246 main.go:141] libmachine: (embed-certs-767480) Calling .GetSSHUsername
	I1026 01:57:27.161403   61246 sshutil.go:53] new ssh client: &{IP:192.168.61.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/embed-certs-767480/id_rsa Username:docker}
	I1026 01:57:27.265107   61246 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1026 01:57:27.322757   61246 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1026 01:57:27.383840   61246 main.go:141] libmachine: Stopping "embed-certs-767480"...
	I1026 01:57:27.383892   61246 main.go:141] libmachine: (embed-certs-767480) Calling .GetState
	I1026 01:57:27.385504   61246 main.go:141] libmachine: (embed-certs-767480) Calling .Stop
	I1026 01:57:27.388872   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 0/120
	I1026 01:57:28.390436   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 1/120
	I1026 01:57:29.391593   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 2/120
	I1026 01:57:30.393052   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 3/120
	I1026 01:57:31.394371   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 4/120
	I1026 01:57:32.396442   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 5/120
	I1026 01:57:33.397792   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 6/120
	I1026 01:57:34.400385   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 7/120
	I1026 01:57:35.402046   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 8/120
	I1026 01:57:36.403404   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 9/120
	I1026 01:57:37.405624   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 10/120
	I1026 01:57:38.406884   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 11/120
	I1026 01:57:39.408304   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 12/120
	I1026 01:57:40.409871   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 13/120
	I1026 01:57:41.411110   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 14/120
	I1026 01:57:42.413142   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 15/120
	I1026 01:57:43.414813   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 16/120
	I1026 01:57:44.416176   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 17/120
	I1026 01:57:45.417663   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 18/120
	I1026 01:57:46.419103   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 19/120
	I1026 01:57:47.421315   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 20/120
	I1026 01:57:48.422752   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 21/120
	I1026 01:57:49.424145   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 22/120
	I1026 01:57:50.425496   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 23/120
	I1026 01:57:51.426774   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 24/120
	I1026 01:57:52.428657   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 25/120
	I1026 01:57:53.430119   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 26/120
	I1026 01:57:54.431500   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 27/120
	I1026 01:57:55.432872   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 28/120
	I1026 01:57:56.434285   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 29/120
	I1026 01:57:57.436504   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 30/120
	I1026 01:57:58.437791   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 31/120
	I1026 01:57:59.438907   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 32/120
	I1026 01:58:00.440214   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 33/120
	I1026 01:58:01.441791   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 34/120
	I1026 01:58:02.443595   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 35/120
	I1026 01:58:03.445591   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 36/120
	I1026 01:58:04.446945   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 37/120
	I1026 01:58:05.448463   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 38/120
	I1026 01:58:06.449845   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 39/120
	I1026 01:58:07.452041   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 40/120
	I1026 01:58:08.453350   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 41/120
	I1026 01:58:09.455521   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 42/120
	I1026 01:58:10.457108   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 43/120
	I1026 01:58:11.458474   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 44/120
	I1026 01:58:12.460383   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 45/120
	I1026 01:58:13.462082   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 46/120
	I1026 01:58:14.463499   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 47/120
	I1026 01:58:15.464862   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 48/120
	I1026 01:58:16.466292   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 49/120
	I1026 01:58:17.468410   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 50/120
	I1026 01:58:18.469824   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 51/120
	I1026 01:58:19.471872   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 52/120
	I1026 01:58:20.473287   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 53/120
	I1026 01:58:21.474610   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 54/120
	I1026 01:58:22.476642   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 55/120
	I1026 01:58:23.478101   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 56/120
	I1026 01:58:24.479750   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 57/120
	I1026 01:58:25.481218   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 58/120
	I1026 01:58:26.482479   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 59/120
	I1026 01:58:27.484432   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 60/120
	I1026 01:58:28.485843   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 61/120
	I1026 01:58:29.487935   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 62/120
	I1026 01:58:30.489315   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 63/120
	I1026 01:58:31.490643   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 64/120
	I1026 01:58:32.492644   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 65/120
	I1026 01:58:33.494641   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 66/120
	I1026 01:58:34.495950   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 67/120
	I1026 01:58:35.498212   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 68/120
	I1026 01:58:36.499622   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 69/120
	I1026 01:58:37.501753   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 70/120
	I1026 01:58:38.503930   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 71/120
	I1026 01:58:39.505284   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 72/120
	I1026 01:58:40.506888   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 73/120
	I1026 01:58:41.508355   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 74/120
	I1026 01:58:42.510250   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 75/120
	I1026 01:58:43.511877   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 76/120
	I1026 01:58:44.513009   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 77/120
	I1026 01:58:45.514421   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 78/120
	I1026 01:58:46.515636   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 79/120
	I1026 01:58:47.516717   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 80/120
	I1026 01:58:48.518065   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 81/120
	I1026 01:58:49.519427   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 82/120
	I1026 01:58:50.520816   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 83/120
	I1026 01:58:51.522024   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 84/120
	I1026 01:58:52.523675   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 85/120
	I1026 01:58:53.524954   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 86/120
	I1026 01:58:54.526458   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 87/120
	I1026 01:58:55.527783   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 88/120
	I1026 01:58:56.529446   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 89/120
	I1026 01:58:57.531306   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 90/120
	I1026 01:58:58.532856   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 91/120
	I1026 01:58:59.534238   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 92/120
	I1026 01:59:00.535678   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 93/120
	I1026 01:59:01.537026   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 94/120
	I1026 01:59:02.538931   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 95/120
	I1026 01:59:03.540240   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 96/120
	I1026 01:59:04.541713   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 97/120
	I1026 01:59:05.542939   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 98/120
	I1026 01:59:06.544416   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 99/120
	I1026 01:59:07.546075   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 100/120
	I1026 01:59:08.548017   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 101/120
	I1026 01:59:09.549129   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 102/120
	I1026 01:59:10.550584   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 103/120
	I1026 01:59:11.551874   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 104/120
	I1026 01:59:12.553899   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 105/120
	I1026 01:59:13.555765   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 106/120
	I1026 01:59:14.557047   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 107/120
	I1026 01:59:15.558296   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 108/120
	I1026 01:59:16.559629   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 109/120
	I1026 01:59:17.561794   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 110/120
	I1026 01:59:18.563882   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 111/120
	I1026 01:59:19.565279   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 112/120
	I1026 01:59:20.566756   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 113/120
	I1026 01:59:21.568361   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 114/120
	I1026 01:59:22.570562   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 115/120
	I1026 01:59:23.571884   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 116/120
	I1026 01:59:24.573349   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 117/120
	I1026 01:59:25.574702   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 118/120
	I1026 01:59:26.576098   61246 main.go:141] libmachine: (embed-certs-767480) Waiting for machine to stop 119/120
	I1026 01:59:27.576806   61246 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1026 01:59:27.576869   61246 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1026 01:59:27.578850   61246 out.go:201] 
	W1026 01:59:27.580335   61246 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1026 01:59:27.580349   61246 out.go:270] * 
	* 
	W1026 01:59:27.582873   61246 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 01:59:27.584082   61246 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-767480 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767480 -n embed-certs-767480
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767480 -n embed-certs-767480: exit status 3 (18.508144866s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1026 01:59:46.093734   62049 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.84:22: connect: no route to host
	E1026 01:59:46.093756   62049 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.84:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-767480" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.01s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-385716 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-385716 create -f testdata/busybox.yaml: exit status 1 (43.092007ms)

** stderr ** 
	error: context "old-k8s-version-385716" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-385716 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-385716 -n old-k8s-version-385716
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-385716 -n old-k8s-version-385716: exit status 6 (222.002997ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1026 01:59:13.130978   61880 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-385716" does not appear in /home/jenkins/minikube-integration/19868-8680/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-385716" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-385716 -n old-k8s-version-385716
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-385716 -n old-k8s-version-385716: exit status 6 (214.120338ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1026 01:59:13.346938   61910 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-385716" does not appear in /home/jenkins/minikube-integration/19868-8680/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-385716" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (80.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-385716 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-385716 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m19.994739469s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-385716 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-385716 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-385716 describe deploy/metrics-server -n kube-system: exit status 1 (58.507394ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-385716" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-385716 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-385716 -n old-k8s-version-385716
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-385716 -n old-k8s-version-385716: exit status 6 (236.427372ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1026 02:00:33.636155   62595 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-385716" does not appear in /home/jenkins/minikube-integration/19868-8680/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-385716" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (80.29s)
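Triage sketch (assumptions: same Jenkins host and profile as in the log; none of these commands were run by the test): the addon callback failed because kubectl on the node could not reach the apiserver ("connection to the server localhost:8443 was refused"), and the local context is missing just as in DeployApp above. Commands that could separate the two problems:

    out/minikube-linux-amd64 status -p old-k8s-version-385716 --alsologtostderr      # does minikube believe the apiserver is up?
    out/minikube-linux-amd64 logs -p old-k8s-version-385716 --file=logs.txt          # capture logs, as the error box suggests
    out/minikube-linux-amd64 ssh -p old-k8s-version-385716 "sudo crictl ps -a"       # list control-plane containers on the VM (assumes SSH to the node still works)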

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-093148 -n no-preload-093148
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-093148 -n no-preload-093148: exit status 3 (3.167769909s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1026 01:59:30.061870   62019 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.9:22: connect: no route to host
	E1026 01:59:30.061889   62019 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.9:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-093148 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-093148 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151753106s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.9:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-093148 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-093148 -n no-preload-093148
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-093148 -n no-preload-093148: exit status 3 (3.064098061s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1026 01:59:39.277783   62129 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.9:22: connect: no route to host
	E1026 01:59:39.277807   62129 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.9:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-093148" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
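Triage sketch (hypothetical follow-up, not executed by the test): the post-stop status is "Error" rather than "Stopped" because every SSH dial to 192.168.50.9:22 returns "no route to host"; the same pattern appears in the embed-certs EnableAddonAfterStop failure below (192.168.61.84:22). With the kvm2 driver the domain state can be checked outside minikube, assuming libvirt access on the CI host:

    sudo virsh list --all | grep no-preload-093148                          # is the VM shut off, paused, or running without networking?
    out/minikube-linux-amd64 status -p no-preload-093148 --alsologtostderr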

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767480 -n embed-certs-767480
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767480 -n embed-certs-767480: exit status 3 (3.168398175s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1026 01:59:49.261800   62260 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.84:22: connect: no route to host
	E1026 01:59:49.261827   62260 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.84:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-767480 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-767480 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151828306s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.84:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-767480 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767480 -n embed-certs-767480
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767480 -n embed-certs-767480: exit status 3 (3.065167082s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1026 01:59:58.477789   62348 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.84:22: connect: no route to host
	E1026 01:59:58.477809   62348 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.84:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-767480" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (751.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-385716 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1026 02:01:37.284897   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:03:00.357825   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:03:52.961532   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:06:37.284828   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:08:52.961157   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-385716 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m30.228017725s)

                                                
                                                
-- stdout --
	* [old-k8s-version-385716] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-385716" primary control-plane node in "old-k8s-version-385716" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-385716" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 02:00:39.177522   62745 out.go:345] Setting OutFile to fd 1 ...
	I1026 02:00:39.177661   62745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:00:39.177673   62745 out.go:358] Setting ErrFile to fd 2...
	I1026 02:00:39.177680   62745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:00:39.177953   62745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 02:00:39.178950   62745 out.go:352] Setting JSON to false
	I1026 02:00:39.180293   62745 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6179,"bootTime":1729901860,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 02:00:39.180391   62745 start.go:139] virtualization: kvm guest
	I1026 02:00:39.182493   62745 out.go:177] * [old-k8s-version-385716] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 02:00:39.183770   62745 notify.go:220] Checking for updates...
	I1026 02:00:39.183773   62745 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 02:00:39.185074   62745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 02:00:39.186438   62745 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:00:39.187667   62745 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:00:39.188764   62745 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 02:00:39.189932   62745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 02:00:39.191412   62745 config.go:182] Loaded profile config "old-k8s-version-385716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1026 02:00:39.191785   62745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:00:39.191842   62745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:00:39.207286   62745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39591
	I1026 02:00:39.207606   62745 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:00:39.208098   62745 main.go:141] libmachine: Using API Version  1
	I1026 02:00:39.208121   62745 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:00:39.208420   62745 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:00:39.208554   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:00:39.210168   62745 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1026 02:00:39.211253   62745 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 02:00:39.211530   62745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:00:39.211570   62745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:00:39.225940   62745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35215
	I1026 02:00:39.226306   62745 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:00:39.226696   62745 main.go:141] libmachine: Using API Version  1
	I1026 02:00:39.226716   62745 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:00:39.227027   62745 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:00:39.227175   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:00:39.262038   62745 out.go:177] * Using the kvm2 driver based on existing profile
	I1026 02:00:39.263246   62745 start.go:297] selected driver: kvm2
	I1026 02:00:39.263262   62745 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-385716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:00:39.263361   62745 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 02:00:39.264013   62745 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:00:39.264089   62745 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 02:00:39.278956   62745 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 02:00:39.279371   62745 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:00:39.279401   62745 cni.go:84] Creating CNI manager for ""
	I1026 02:00:39.279448   62745 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:00:39.279481   62745 start.go:340] cluster config:
	{Name:old-k8s-version-385716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385716 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:00:39.279589   62745 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:00:39.282054   62745 out.go:177] * Starting "old-k8s-version-385716" primary control-plane node in "old-k8s-version-385716" cluster
	I1026 02:00:39.283177   62745 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1026 02:00:39.283204   62745 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1026 02:00:39.283219   62745 cache.go:56] Caching tarball of preloaded images
	I1026 02:00:39.283326   62745 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 02:00:39.283340   62745 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1026 02:00:39.283432   62745 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/config.json ...
	I1026 02:00:39.283602   62745 start.go:360] acquireMachinesLock for old-k8s-version-385716: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 02:04:34.301944   62745 start.go:364] duration metric: took 3m55.01831188s to acquireMachinesLock for "old-k8s-version-385716"
	I1026 02:04:34.302015   62745 start.go:96] Skipping create...Using existing machine configuration
	I1026 02:04:34.302023   62745 fix.go:54] fixHost starting: 
	I1026 02:04:34.302483   62745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:04:34.302539   62745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:04:34.319621   62745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45345
	I1026 02:04:34.320093   62745 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:04:34.320633   62745 main.go:141] libmachine: Using API Version  1
	I1026 02:04:34.320663   62745 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:04:34.321018   62745 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:04:34.321191   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:04:34.321343   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetState
	I1026 02:04:34.322823   62745 fix.go:112] recreateIfNeeded on old-k8s-version-385716: state=Stopped err=<nil>
	I1026 02:04:34.322854   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	W1026 02:04:34.323009   62745 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 02:04:34.324931   62745 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-385716" ...
	I1026 02:04:34.326257   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .Start
	I1026 02:04:34.326406   62745 main.go:141] libmachine: (old-k8s-version-385716) Ensuring networks are active...
	I1026 02:04:34.327154   62745 main.go:141] libmachine: (old-k8s-version-385716) Ensuring network default is active
	I1026 02:04:34.327468   62745 main.go:141] libmachine: (old-k8s-version-385716) Ensuring network mk-old-k8s-version-385716 is active
	I1026 02:04:34.327843   62745 main.go:141] libmachine: (old-k8s-version-385716) Getting domain xml...
	I1026 02:04:34.328494   62745 main.go:141] libmachine: (old-k8s-version-385716) Creating domain...
	I1026 02:04:35.570715   62745 main.go:141] libmachine: (old-k8s-version-385716) Waiting to get IP...
	I1026 02:04:35.571457   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:35.571935   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:35.572026   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:35.571914   63673 retry.go:31] will retry after 229.540157ms: waiting for machine to come up
	I1026 02:04:35.803476   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:35.803988   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:35.804009   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:35.803941   63673 retry.go:31] will retry after 271.688891ms: waiting for machine to come up
	I1026 02:04:36.077522   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:36.078096   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:36.078125   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:36.078053   63673 retry.go:31] will retry after 374.365537ms: waiting for machine to come up
	I1026 02:04:36.453868   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:36.454427   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:36.454456   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:36.454381   63673 retry.go:31] will retry after 578.001931ms: waiting for machine to come up
	I1026 02:04:37.034042   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:37.034553   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:37.034585   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:37.034473   63673 retry.go:31] will retry after 469.528312ms: waiting for machine to come up
	I1026 02:04:37.505236   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:37.505849   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:37.505885   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:37.505822   63673 retry.go:31] will retry after 826.394258ms: waiting for machine to come up
	I1026 02:04:38.333978   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:38.334380   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:38.334410   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:38.334336   63673 retry.go:31] will retry after 731.652813ms: waiting for machine to come up
	I1026 02:04:39.067272   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:39.067750   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:39.067777   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:39.067697   63673 retry.go:31] will retry after 1.141938018s: waiting for machine to come up
	I1026 02:04:40.211539   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:40.211930   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:40.211987   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:40.211906   63673 retry.go:31] will retry after 1.591834442s: waiting for machine to come up
	I1026 02:04:41.805096   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:41.805608   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:41.805638   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:41.805563   63673 retry.go:31] will retry after 2.248972392s: waiting for machine to come up
	I1026 02:04:44.055913   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:44.056399   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:44.056429   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:44.056350   63673 retry.go:31] will retry after 1.748696748s: waiting for machine to come up
	I1026 02:04:45.806729   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:45.807252   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:45.807282   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:45.807210   63673 retry.go:31] will retry after 2.585377512s: waiting for machine to come up
	I1026 02:04:48.396305   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:48.396788   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | unable to find current IP address of domain old-k8s-version-385716 in network mk-old-k8s-version-385716
	I1026 02:04:48.396822   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | I1026 02:04:48.396742   63673 retry.go:31] will retry after 3.406908475s: waiting for machine to come up
	I1026 02:04:51.806766   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:51.807223   62745 main.go:141] libmachine: (old-k8s-version-385716) Found IP for machine: 192.168.39.33
	I1026 02:04:51.807244   62745 main.go:141] libmachine: (old-k8s-version-385716) Reserving static IP address...
	I1026 02:04:51.807260   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has current primary IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:51.807631   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "old-k8s-version-385716", mac: "52:54:00:f3:3d:37", ip: "192.168.39.33"} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:51.807660   62745 main.go:141] libmachine: (old-k8s-version-385716) Reserved static IP address: 192.168.39.33
	I1026 02:04:51.807682   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | skip adding static IP to network mk-old-k8s-version-385716 - found existing host DHCP lease matching {name: "old-k8s-version-385716", mac: "52:54:00:f3:3d:37", ip: "192.168.39.33"}
	I1026 02:04:51.807702   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | Getting to WaitForSSH function...
	I1026 02:04:51.807720   62745 main.go:141] libmachine: (old-k8s-version-385716) Waiting for SSH to be available...
	I1026 02:04:51.809812   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:51.810208   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:51.810240   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:51.810346   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | Using SSH client type: external
	I1026 02:04:51.810374   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa (-rw-------)
	I1026 02:04:51.810409   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.33 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 02:04:51.810433   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | About to run SSH command:
	I1026 02:04:51.810447   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | exit 0
	I1026 02:04:51.933521   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | SSH cmd err, output: <nil>: 
	I1026 02:04:51.933852   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetConfigRaw
	I1026 02:04:51.934587   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetIP
	I1026 02:04:51.937932   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:51.938342   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:51.938376   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:51.938654   62745 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/config.json ...
	I1026 02:04:51.938912   62745 machine.go:93] provisionDockerMachine start ...
	I1026 02:04:51.938936   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:04:51.939142   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:51.941577   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:51.941907   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:51.941938   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:51.942101   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:51.942277   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:51.942448   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:51.942577   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:51.942738   62745 main.go:141] libmachine: Using SSH client type: native
	I1026 02:04:51.942988   62745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I1026 02:04:51.943004   62745 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 02:04:52.041280   62745 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1026 02:04:52.041310   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetMachineName
	I1026 02:04:52.041535   62745 buildroot.go:166] provisioning hostname "old-k8s-version-385716"
	I1026 02:04:52.041558   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetMachineName
	I1026 02:04:52.041750   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:52.044276   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.044625   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:52.044654   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.044794   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:52.044973   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.045125   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.045249   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:52.045402   62745 main.go:141] libmachine: Using SSH client type: native
	I1026 02:04:52.045586   62745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I1026 02:04:52.045601   62745 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-385716 && echo "old-k8s-version-385716" | sudo tee /etc/hostname
	I1026 02:04:52.158916   62745 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-385716
	
	I1026 02:04:52.158952   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:52.161567   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.161930   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:52.161957   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.162150   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:52.162318   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.162443   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.162589   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:52.162739   62745 main.go:141] libmachine: Using SSH client type: native
	I1026 02:04:52.162921   62745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I1026 02:04:52.162937   62745 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-385716' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-385716/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-385716' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 02:04:52.269922   62745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:04:52.269956   62745 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 02:04:52.269995   62745 buildroot.go:174] setting up certificates
	I1026 02:04:52.270003   62745 provision.go:84] configureAuth start
	I1026 02:04:52.270012   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetMachineName
	I1026 02:04:52.270280   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetIP
	I1026 02:04:52.272938   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.273310   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:52.273346   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.273510   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:52.275383   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.275640   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:52.275672   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.275820   62745 provision.go:143] copyHostCerts
	I1026 02:04:52.275894   62745 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 02:04:52.275912   62745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 02:04:52.275989   62745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 02:04:52.276115   62745 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 02:04:52.276125   62745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 02:04:52.276158   62745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 02:04:52.276233   62745 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 02:04:52.276242   62745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 02:04:52.276269   62745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 02:04:52.276336   62745 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-385716 san=[127.0.0.1 192.168.39.33 localhost minikube old-k8s-version-385716]
	I1026 02:04:52.499439   62745 provision.go:177] copyRemoteCerts
	I1026 02:04:52.499509   62745 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 02:04:52.499540   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:52.502255   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.502611   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:52.502652   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.502822   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:52.503012   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.503155   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:52.503272   62745 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa Username:docker}
	I1026 02:04:52.587057   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1026 02:04:52.609360   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 02:04:52.630632   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 02:04:52.651916   62745 provision.go:87] duration metric: took 381.902063ms to configureAuth
	I1026 02:04:52.651946   62745 buildroot.go:189] setting minikube options for container-runtime
	I1026 02:04:52.652125   62745 config.go:182] Loaded profile config "old-k8s-version-385716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1026 02:04:52.652208   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:52.654847   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.655123   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:52.655151   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.655334   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:52.655512   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.655665   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.655839   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:52.656009   62745 main.go:141] libmachine: Using SSH client type: native
	I1026 02:04:52.656162   62745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I1026 02:04:52.656177   62745 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 02:04:52.869041   62745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 02:04:52.869062   62745 machine.go:96] duration metric: took 930.134589ms to provisionDockerMachine
	I1026 02:04:52.869073   62745 start.go:293] postStartSetup for "old-k8s-version-385716" (driver="kvm2")
	I1026 02:04:52.869086   62745 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 02:04:52.869109   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:04:52.869393   62745 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 02:04:52.869430   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:52.871942   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.872247   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:52.872274   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.872431   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:52.872627   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.872791   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:52.872931   62745 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa Username:docker}
	I1026 02:04:52.951357   62745 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 02:04:52.955344   62745 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 02:04:52.955365   62745 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 02:04:52.955428   62745 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 02:04:52.955497   62745 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 02:04:52.955581   62745 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 02:04:52.965327   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:04:52.988688   62745 start.go:296] duration metric: took 119.602944ms for postStartSetup
	I1026 02:04:52.988728   62745 fix.go:56] duration metric: took 18.686705472s for fixHost
	I1026 02:04:52.988752   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:52.990958   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.991277   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:52.991305   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:52.991406   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:52.991593   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.991745   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:52.991877   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:52.992029   62745 main.go:141] libmachine: Using SSH client type: native
	I1026 02:04:52.992178   62745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I1026 02:04:52.992187   62745 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 02:04:53.093645   62745 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729908293.069341261
	
	I1026 02:04:53.093666   62745 fix.go:216] guest clock: 1729908293.069341261
	I1026 02:04:53.093676   62745 fix.go:229] Guest: 2024-10-26 02:04:53.069341261 +0000 UTC Remote: 2024-10-26 02:04:52.988733346 +0000 UTC m=+253.848836792 (delta=80.607915ms)
	I1026 02:04:53.093701   62745 fix.go:200] guest clock delta is within tolerance: 80.607915ms
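Note: the fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the ~80ms delta. A minimal sketch of that comparison, assuming a 1s tolerance (the log does not state the actual threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch converts `date +%s.%N` output (seconds.nanoseconds, %N is always
// nine digits) into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1729908293.069341261") // value echoed by the guest above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed threshold; the log only says "within tolerance"
	fmt.Printf("guest clock delta %v, acceptable: %v\n", delta, delta <= tolerance)
}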
	I1026 02:04:53.093716   62745 start.go:83] releasing machines lock for "old-k8s-version-385716", held for 18.791723963s
	I1026 02:04:53.093747   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:04:53.094026   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetIP
	I1026 02:04:53.096804   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:53.097196   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:53.097232   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:53.097353   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:04:53.097855   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:04:53.098045   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .DriverName
	I1026 02:04:53.098101   62745 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 02:04:53.098154   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:53.098250   62745 ssh_runner.go:195] Run: cat /version.json
	I1026 02:04:53.098277   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHHostname
	I1026 02:04:53.100486   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:53.100774   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:53.100814   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:53.100946   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:53.100954   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:53.101122   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:53.101277   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:53.101301   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:53.101338   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:53.101445   62745 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa Username:docker}
	I1026 02:04:53.101546   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHPort
	I1026 02:04:53.101671   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHKeyPath
	I1026 02:04:53.101812   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetSSHUsername
	I1026 02:04:53.101970   62745 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/old-k8s-version-385716/id_rsa Username:docker}
	I1026 02:04:53.207938   62745 ssh_runner.go:195] Run: systemctl --version
	I1026 02:04:53.213560   62745 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 02:04:53.354252   62745 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 02:04:53.361628   62745 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 02:04:53.361692   62745 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 02:04:53.379919   62745 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 02:04:53.379947   62745 start.go:495] detecting cgroup driver to use...
	I1026 02:04:53.380013   62745 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 02:04:53.394591   62745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 02:04:53.407921   62745 docker.go:217] disabling cri-docker service (if available) ...
	I1026 02:04:53.407972   62745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 02:04:53.420732   62745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 02:04:53.433679   62745 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 02:04:53.543848   62745 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 02:04:53.696256   62745 docker.go:233] disabling docker service ...
	I1026 02:04:53.696335   62745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 02:04:53.712952   62745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 02:04:53.726273   62745 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 02:04:53.869139   62745 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 02:04:53.990619   62745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 02:04:54.003422   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 02:04:54.021067   62745 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1026 02:04:54.021139   62745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:04:54.030585   62745 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 02:04:54.030662   62745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:04:54.040121   62745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:04:54.049648   62745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:04:54.059293   62745 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
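Note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the pause:3.2 image and the cgroupfs cgroup manager. A native sketch of the same rewrite, illustrative only (the log does it with sed over SSH, and it must run as root on the guest):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Pin the pause image, mirroring the first sed above.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	// Switch the cgroup manager, mirroring the second sed above.
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}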
	I1026 02:04:54.069549   62745 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 02:04:54.078429   62745 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 02:04:54.078477   62745 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 02:04:54.091600   62745 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 02:04:54.100699   62745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:04:54.233461   62745 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 02:04:54.319457   62745 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 02:04:54.319533   62745 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 02:04:54.324335   62745 start.go:563] Will wait 60s for crictl version
	I1026 02:04:54.324395   62745 ssh_runner.go:195] Run: which crictl
	I1026 02:04:54.329603   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 02:04:54.381910   62745 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 02:04:54.381985   62745 ssh_runner.go:195] Run: crio --version
	I1026 02:04:54.420254   62745 ssh_runner.go:195] Run: crio --version
	I1026 02:04:54.451157   62745 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1026 02:04:54.452507   62745 main.go:141] libmachine: (old-k8s-version-385716) Calling .GetIP
	I1026 02:04:54.455334   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:54.455660   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:3d:37", ip: ""} in network mk-old-k8s-version-385716: {Iface:virbr1 ExpiryTime:2024-10-26 03:04:45 +0000 UTC Type:0 Mac:52:54:00:f3:3d:37 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:old-k8s-version-385716 Clientid:01:52:54:00:f3:3d:37}
	I1026 02:04:54.455685   62745 main.go:141] libmachine: (old-k8s-version-385716) DBG | domain old-k8s-version-385716 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:3d:37 in network mk-old-k8s-version-385716
	I1026 02:04:54.455911   62745 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 02:04:54.459769   62745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:04:54.471699   62745 kubeadm.go:883] updating cluster {Name:old-k8s-version-385716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 02:04:54.471797   62745 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1026 02:04:54.471843   62745 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:04:54.517960   62745 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1026 02:04:54.518050   62745 ssh_runner.go:195] Run: which lz4
	I1026 02:04:54.522001   62745 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 02:04:54.525626   62745 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 02:04:54.525652   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1026 02:04:55.993918   62745 crio.go:462] duration metric: took 1.471949666s to copy over tarball
	I1026 02:04:55.994015   62745 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 02:04:58.883868   62745 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.889820571s)
	I1026 02:04:58.883901   62745 crio.go:469] duration metric: took 2.88994785s to extract the tarball
	I1026 02:04:58.883911   62745 ssh_runner.go:146] rm: /preloaded.tar.lz4
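Note: the stat/scp/tar lines above show the preload flow: the tarball is absent on the guest, so it is copied from the local cache, unpacked under /var with lz4, and then removed. A local, illustrative sketch of the check-and-extract half of that flow, using the paths from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const preload = "/preloaded.tar.lz4"
	// Existence check, mirroring the `stat -c "%s %y"` run above.
	if _, err := os.Stat(preload); err != nil {
		fmt.Println("preload not present, would scp it from the cache:", err)
		return
	}
	// Unpack under /var with lz4, same flags as the tar command in the log.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include",
		"security.capability", "-I", "lz4", "-C", "/var", "-xf", preload)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	_ = os.Remove(preload) // mirrors the rm once extraction succeeds
}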
	I1026 02:04:58.926928   62745 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:04:58.960838   62745 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1026 02:04:58.960869   62745 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1026 02:04:58.960922   62745 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:04:58.960969   62745 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1026 02:04:58.961032   62745 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 02:04:58.961068   62745 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1026 02:04:58.961103   62745 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1026 02:04:58.961007   62745 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1026 02:04:58.961048   62745 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1026 02:04:58.961015   62745 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1026 02:04:58.962949   62745 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 02:04:58.962965   62745 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1026 02:04:58.962951   62745 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1026 02:04:58.963006   62745 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1026 02:04:58.962967   62745 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1026 02:04:58.963034   62745 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1026 02:04:58.962992   62745 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1026 02:04:58.963042   62745 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:04:59.214479   62745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1026 02:04:59.214983   62745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1026 02:04:59.217945   62745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 02:04:59.218962   62745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1026 02:04:59.227143   62745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1026 02:04:59.230137   62745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1026 02:04:59.231061   62745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1026 02:04:59.359793   62745 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1026 02:04:59.359849   62745 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1026 02:04:59.359906   62745 ssh_runner.go:195] Run: which crictl
	I1026 02:04:59.359910   62745 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1026 02:04:59.359941   62745 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1026 02:04:59.359980   62745 ssh_runner.go:195] Run: which crictl
	I1026 02:04:59.395980   62745 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1026 02:04:59.396030   62745 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 02:04:59.396050   62745 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1026 02:04:59.396066   62745 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1026 02:04:59.396082   62745 ssh_runner.go:195] Run: which crictl
	I1026 02:04:59.396092   62745 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1026 02:04:59.396095   62745 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1026 02:04:59.396138   62745 ssh_runner.go:195] Run: which crictl
	I1026 02:04:59.396168   62745 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1026 02:04:59.396138   62745 ssh_runner.go:195] Run: which crictl
	I1026 02:04:59.396197   62745 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1026 02:04:59.396233   62745 ssh_runner.go:195] Run: which crictl
	I1026 02:04:59.399339   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1026 02:04:59.399382   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1026 02:04:59.399463   62745 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1026 02:04:59.399494   62745 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1026 02:04:59.399530   62745 ssh_runner.go:195] Run: which crictl
	I1026 02:04:59.406867   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1026 02:04:59.406919   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1026 02:04:59.406954   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1026 02:04:59.407187   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 02:04:59.512171   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1026 02:04:59.512185   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1026 02:04:59.512171   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1026 02:04:59.524252   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1026 02:04:59.524253   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1026 02:04:59.534571   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 02:04:59.534655   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1026 02:04:59.638041   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1026 02:04:59.643736   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1026 02:04:59.678053   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1026 02:04:59.678117   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1026 02:04:59.678266   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1026 02:04:59.703981   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1026 02:04:59.703981   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1026 02:04:59.789073   62745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1026 02:04:59.789147   62745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1026 02:04:59.813698   62745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1026 02:04:59.813728   62745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1026 02:04:59.813746   62745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1026 02:04:59.822258   62745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1026 02:04:59.828510   62745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1026 02:04:59.852264   62745 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1026 02:05:00.143182   62745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:05:00.285257   62745 cache_images.go:92] duration metric: took 1.324368126s to LoadCachedImages
	W1026 02:05:00.285350   62745 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19868-8680/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1026 02:05:00.285367   62745 kubeadm.go:934] updating node { 192.168.39.33 8443 v1.20.0 crio true true} ...
	I1026 02:05:00.285486   62745 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-385716 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 02:05:00.285571   62745 ssh_runner.go:195] Run: crio config
	I1026 02:05:00.335736   62745 cni.go:84] Creating CNI manager for ""
	I1026 02:05:00.335764   62745 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:05:00.335779   62745 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 02:05:00.335797   62745 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.33 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-385716 NodeName:old-k8s-version-385716 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1026 02:05:00.335929   62745 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-385716"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 02:05:00.335988   62745 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1026 02:05:00.346410   62745 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 02:05:00.346490   62745 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 02:05:00.356388   62745 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1026 02:05:00.373587   62745 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 02:05:00.389716   62745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1026 02:05:00.406194   62745 ssh_runner.go:195] Run: grep 192.168.39.33	control-plane.minikube.internal$ /etc/hosts
	I1026 02:05:00.409900   62745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:05:00.421876   62745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:05:00.547228   62745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:05:00.563383   62745 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716 for IP: 192.168.39.33
	I1026 02:05:00.563409   62745 certs.go:194] generating shared ca certs ...
	I1026 02:05:00.563429   62745 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:05:00.563601   62745 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 02:05:00.563657   62745 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 02:05:00.563670   62745 certs.go:256] generating profile certs ...
	I1026 02:05:00.563798   62745 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.key
	I1026 02:05:00.629961   62745 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.key.63a78891
	I1026 02:05:00.630065   62745 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/proxy-client.key
	I1026 02:05:00.630247   62745 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 02:05:00.630291   62745 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 02:05:00.630311   62745 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 02:05:00.630345   62745 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 02:05:00.630381   62745 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 02:05:00.630418   62745 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 02:05:00.630475   62745 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:05:00.631357   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 02:05:00.675285   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 02:05:00.714335   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 02:05:00.755344   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 02:05:00.787528   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 02:05:00.826139   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 02:05:00.851102   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 02:05:00.875425   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 02:05:00.900226   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 02:05:00.931632   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 02:05:00.959203   62745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 02:05:00.983986   62745 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 02:05:01.000930   62745 ssh_runner.go:195] Run: openssl version
	I1026 02:05:01.007168   62745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 02:05:01.018252   62745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 02:05:01.022960   62745 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 02:05:01.023022   62745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 02:05:01.028915   62745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 02:05:01.039800   62745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 02:05:01.050925   62745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:05:01.055754   62745 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:05:01.055809   62745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:05:01.061382   62745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 02:05:01.071996   62745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 02:05:01.082621   62745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 02:05:01.087522   62745 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 02:05:01.087608   62745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 02:05:01.093377   62745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
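Note: the openssl/ln pairs above follow the standard CA-store convention: each PEM under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so OpenSSL can look it up. A sketch that shells out to openssl for the hash (the subject-hash algorithm is OpenSSL's own) and creates the link; the path is the one from the log and the program needs root:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/176152.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // the link name in the log suggests "3ec20f2e" here
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror `ln -fs` (force-replace an existing link)
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
}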
	I1026 02:05:01.104331   62745 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 02:05:01.109313   62745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 02:05:01.115603   62745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 02:05:01.122183   62745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 02:05:01.128868   62745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 02:05:01.135327   62745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 02:05:01.142955   62745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
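Note: the `-checkend 86400` runs above verify that each control-plane certificate is still valid 24 hours from now before the existing files are reused; openssl exits non-zero when the certificate expires within the given window. A native crypto/x509 equivalent of one such check, with a path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at path is still valid `window`
// from now, matching `openssl x509 -noout -in <path> -checkend <seconds>`.
func checkend(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}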
	I1026 02:05:01.151353   62745 kubeadm.go:392] StartCluster: {Name:old-k8s-version-385716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:05:01.151447   62745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 02:05:01.151537   62745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:05:01.200766   62745 cri.go:89] found id: ""
	I1026 02:05:01.200845   62745 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 02:05:01.211671   62745 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1026 02:05:01.211697   62745 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1026 02:05:01.211760   62745 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 02:05:01.222114   62745 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 02:05:01.223151   62745 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-385716" does not appear in /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:05:01.223791   62745 kubeconfig.go:62] /home/jenkins/minikube-integration/19868-8680/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-385716" cluster setting kubeconfig missing "old-k8s-version-385716" context setting]
	I1026 02:05:01.224728   62745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:05:01.289209   62745 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 02:05:01.300342   62745 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.33
	I1026 02:05:01.300385   62745 kubeadm.go:1160] stopping kube-system containers ...
	I1026 02:05:01.300400   62745 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1026 02:05:01.300462   62745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:05:01.340462   62745 cri.go:89] found id: ""
	I1026 02:05:01.340538   62745 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1026 02:05:01.357940   62745 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:05:01.367863   62745 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:05:01.367885   62745 kubeadm.go:157] found existing configuration files:
	
	I1026 02:05:01.367940   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 02:05:01.378121   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:05:01.378189   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:05:01.388445   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 02:05:01.398096   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:05:01.398170   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:05:01.407914   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 02:05:01.418110   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:05:01.418177   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:05:01.428678   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 02:05:01.438749   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:05:01.438850   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
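Note: the grep/rm sequence above keeps a kubeconfig under /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8443; here none of the four files exist, so each grep fails and the rm -f is a no-op before kubeadm regenerates them. A native sketch of that keep-or-remove decision, illustrative only:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err == nil && strings.Contains(string(data), endpoint) {
			fmt.Println("keeping", conf)
			continue
		}
		// Missing or pointing elsewhere: drop it, like `sudo rm -f` in the log.
		_ = os.Remove(conf)
		fmt.Println("removed (or absent)", conf)
	}
}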
	I1026 02:05:01.450759   62745 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 02:05:01.461160   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:05:01.597114   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:05:02.376008   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:05:02.620455   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:05:02.753408   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:05:02.827566   62745 api_server.go:52] waiting for apiserver process to appear ...
	I1026 02:05:02.827662   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:03.327825   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:03.828494   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:04.328718   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:04.828766   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:05.328706   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:05.827729   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:06.327930   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:06.828400   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:07.327815   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:07.827702   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:08.327796   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:08.828718   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:09.327723   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:09.828684   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:10.327773   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:10.828577   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:11.328614   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:11.828477   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:12.327916   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:12.828195   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:13.327743   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:13.827732   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:14.327816   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:14.828510   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:15.328470   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:15.827751   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:16.328146   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:16.828497   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:17.328639   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:17.827804   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:18.328601   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:18.827909   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:19.327760   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:19.828058   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:20.328487   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:20.827836   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:21.328618   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:21.828692   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:22.328180   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:22.827698   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:23.328474   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:23.828407   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:24.327803   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:24.828131   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:25.328089   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:25.828080   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:26.327838   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:26.828750   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:27.328352   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:27.828164   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:28.328168   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:28.828627   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:29.328775   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:29.828214   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:30.328277   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:30.828549   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:31.328482   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:31.828402   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:32.327877   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:32.828764   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:33.328031   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:33.828373   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:34.328417   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:34.827883   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:35.328611   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:35.828369   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:36.328158   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:36.828404   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:37.327714   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:37.828183   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:38.328432   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:38.828619   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:39.328464   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:39.828733   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:40.328692   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:40.827978   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:41.328589   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:41.828084   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:42.327947   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:42.827814   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:43.328619   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:43.827779   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:44.328770   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:44.828429   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:45.328402   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:45.828561   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:46.328733   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:46.828478   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:47.328066   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:47.828102   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:48.327971   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:48.828607   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:49.328568   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:49.827742   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:50.328650   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:50.828376   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:51.328489   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:51.827803   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:52.328543   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:52.828194   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:53.327741   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:53.828510   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:54.328518   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:54.828001   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:55.328146   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:55.828717   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:56.327938   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:56.828723   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:57.328164   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:57.827948   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:58.328295   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:58.828771   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:59.328113   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:05:59.828023   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:00.327856   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:00.828227   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:01.328318   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:01.828377   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:02.328413   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:02.828408   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:02.828482   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:02.865253   62745 cri.go:89] found id: ""
	I1026 02:06:02.865282   62745 logs.go:282] 0 containers: []
	W1026 02:06:02.865292   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:02.865301   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:02.865365   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:02.897413   62745 cri.go:89] found id: ""
	I1026 02:06:02.897455   62745 logs.go:282] 0 containers: []
	W1026 02:06:02.897466   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:02.897473   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:02.897537   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:02.934081   62745 cri.go:89] found id: ""
	I1026 02:06:02.934104   62745 logs.go:282] 0 containers: []
	W1026 02:06:02.934111   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:02.934117   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:02.934168   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:02.965275   62745 cri.go:89] found id: ""
	I1026 02:06:02.965305   62745 logs.go:282] 0 containers: []
	W1026 02:06:02.965316   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:02.965325   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:02.965391   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:02.997817   62745 cri.go:89] found id: ""
	I1026 02:06:02.997847   62745 logs.go:282] 0 containers: []
	W1026 02:06:02.997854   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:02.997861   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:02.997930   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:03.029105   62745 cri.go:89] found id: ""
	I1026 02:06:03.029137   62745 logs.go:282] 0 containers: []
	W1026 02:06:03.029148   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:03.029156   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:03.029214   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:03.061064   62745 cri.go:89] found id: ""
	I1026 02:06:03.061092   62745 logs.go:282] 0 containers: []
	W1026 02:06:03.061103   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:03.061114   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:03.061177   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:03.095111   62745 cri.go:89] found id: ""
	I1026 02:06:03.095154   62745 logs.go:282] 0 containers: []
	W1026 02:06:03.095164   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:03.095184   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:03.095201   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:03.148013   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:03.148044   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:03.160911   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:03.160948   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:03.282690   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:03.282709   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:03.282720   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:03.356710   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:03.356753   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:05.894053   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:05.906753   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:05.906825   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:05.939843   62745 cri.go:89] found id: ""
	I1026 02:06:05.939893   62745 logs.go:282] 0 containers: []
	W1026 02:06:05.939901   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:05.939914   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:05.939962   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:05.971681   62745 cri.go:89] found id: ""
	I1026 02:06:05.971711   62745 logs.go:282] 0 containers: []
	W1026 02:06:05.971724   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:05.971730   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:05.971777   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:06.023889   62745 cri.go:89] found id: ""
	I1026 02:06:06.023923   62745 logs.go:282] 0 containers: []
	W1026 02:06:06.023934   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:06.023943   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:06.023992   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:06.060326   62745 cri.go:89] found id: ""
	I1026 02:06:06.060356   62745 logs.go:282] 0 containers: []
	W1026 02:06:06.060368   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:06.060375   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:06.060437   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:06.093213   62745 cri.go:89] found id: ""
	I1026 02:06:06.093243   62745 logs.go:282] 0 containers: []
	W1026 02:06:06.093259   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:06.093267   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:06.093331   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:06.125005   62745 cri.go:89] found id: ""
	I1026 02:06:06.125032   62745 logs.go:282] 0 containers: []
	W1026 02:06:06.125042   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:06.125049   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:06.125110   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:06.165744   62745 cri.go:89] found id: ""
	I1026 02:06:06.165771   62745 logs.go:282] 0 containers: []
	W1026 02:06:06.165786   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:06.165795   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:06.165858   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:06.198223   62745 cri.go:89] found id: ""
	I1026 02:06:06.198249   62745 logs.go:282] 0 containers: []
	W1026 02:06:06.198258   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:06.198265   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:06.198275   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:06.247162   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:06.247193   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:06.259963   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:06.259986   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:06.329743   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:06.329770   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:06.329787   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:06.402917   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:06.402953   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:08.941593   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:08.954121   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:08.954182   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:08.986088   62745 cri.go:89] found id: ""
	I1026 02:06:08.986115   62745 logs.go:282] 0 containers: []
	W1026 02:06:08.986126   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:08.986133   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:08.986192   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:09.017861   62745 cri.go:89] found id: ""
	I1026 02:06:09.017888   62745 logs.go:282] 0 containers: []
	W1026 02:06:09.017896   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:09.017901   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:09.017948   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:09.050015   62745 cri.go:89] found id: ""
	I1026 02:06:09.050038   62745 logs.go:282] 0 containers: []
	W1026 02:06:09.050046   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:09.050051   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:09.050096   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:09.081336   62745 cri.go:89] found id: ""
	I1026 02:06:09.081359   62745 logs.go:282] 0 containers: []
	W1026 02:06:09.081366   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:09.081371   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:09.081446   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:09.113330   62745 cri.go:89] found id: ""
	I1026 02:06:09.113364   62745 logs.go:282] 0 containers: []
	W1026 02:06:09.113376   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:09.113384   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:09.113468   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:09.146319   62745 cri.go:89] found id: ""
	I1026 02:06:09.146347   62745 logs.go:282] 0 containers: []
	W1026 02:06:09.146358   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:09.146366   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:09.146425   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:09.177827   62745 cri.go:89] found id: ""
	I1026 02:06:09.177854   62745 logs.go:282] 0 containers: []
	W1026 02:06:09.177866   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:09.177874   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:09.177933   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:09.211351   62745 cri.go:89] found id: ""
	I1026 02:06:09.211389   62745 logs.go:282] 0 containers: []
	W1026 02:06:09.211400   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:09.211411   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:09.211425   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:09.283433   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:09.283459   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:09.283474   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:09.361349   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:09.361383   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:09.397461   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:09.397490   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:09.447443   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:09.447474   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:11.961583   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:11.975577   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:11.975638   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:12.011335   62745 cri.go:89] found id: ""
	I1026 02:06:12.011363   62745 logs.go:282] 0 containers: []
	W1026 02:06:12.011372   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:12.011377   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:12.011432   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:12.048024   62745 cri.go:89] found id: ""
	I1026 02:06:12.048048   62745 logs.go:282] 0 containers: []
	W1026 02:06:12.048056   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:12.048062   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:12.048113   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:12.080372   62745 cri.go:89] found id: ""
	I1026 02:06:12.080394   62745 logs.go:282] 0 containers: []
	W1026 02:06:12.080401   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:12.080407   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:12.080456   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:12.112306   62745 cri.go:89] found id: ""
	I1026 02:06:12.112341   62745 logs.go:282] 0 containers: []
	W1026 02:06:12.112352   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:12.112360   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:12.112424   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:12.146551   62745 cri.go:89] found id: ""
	I1026 02:06:12.146578   62745 logs.go:282] 0 containers: []
	W1026 02:06:12.146588   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:12.146595   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:12.146652   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:12.178248   62745 cri.go:89] found id: ""
	I1026 02:06:12.178277   62745 logs.go:282] 0 containers: []
	W1026 02:06:12.178286   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:12.178291   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:12.178348   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:12.210980   62745 cri.go:89] found id: ""
	I1026 02:06:12.211003   62745 logs.go:282] 0 containers: []
	W1026 02:06:12.211010   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:12.211016   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:12.211067   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:12.244863   62745 cri.go:89] found id: ""
	I1026 02:06:12.244890   62745 logs.go:282] 0 containers: []
	W1026 02:06:12.244901   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:12.244910   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:12.244929   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:12.257397   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:12.257434   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:12.326641   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:12.326670   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:12.326682   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:12.400300   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:12.400343   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:12.456354   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:12.456389   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:15.017291   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:15.031144   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:15.031217   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:15.064159   62745 cri.go:89] found id: ""
	I1026 02:06:15.064189   62745 logs.go:282] 0 containers: []
	W1026 02:06:15.064199   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:15.064206   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:15.064268   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:15.096879   62745 cri.go:89] found id: ""
	I1026 02:06:15.096910   62745 logs.go:282] 0 containers: []
	W1026 02:06:15.096917   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:15.096924   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:15.096986   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:15.131602   62745 cri.go:89] found id: ""
	I1026 02:06:15.131623   62745 logs.go:282] 0 containers: []
	W1026 02:06:15.131630   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:15.131636   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:15.131695   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:15.165190   62745 cri.go:89] found id: ""
	I1026 02:06:15.165216   62745 logs.go:282] 0 containers: []
	W1026 02:06:15.165224   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:15.165230   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:15.165289   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:15.197064   62745 cri.go:89] found id: ""
	I1026 02:06:15.197092   62745 logs.go:282] 0 containers: []
	W1026 02:06:15.197100   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:15.197106   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:15.197153   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:15.233806   62745 cri.go:89] found id: ""
	I1026 02:06:15.233836   62745 logs.go:282] 0 containers: []
	W1026 02:06:15.233845   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:15.233852   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:15.233911   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:15.270313   62745 cri.go:89] found id: ""
	I1026 02:06:15.270338   62745 logs.go:282] 0 containers: []
	W1026 02:06:15.270347   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:15.270355   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:15.270414   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:15.303312   62745 cri.go:89] found id: ""
	I1026 02:06:15.303341   62745 logs.go:282] 0 containers: []
	W1026 02:06:15.303351   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:15.303361   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:15.303374   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:15.355400   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:15.355434   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:15.368325   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:15.368356   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:15.444522   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:15.444548   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:15.444560   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:15.522243   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:15.522278   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:18.064129   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:18.076361   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:18.076440   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:18.107859   62745 cri.go:89] found id: ""
	I1026 02:06:18.107894   62745 logs.go:282] 0 containers: []
	W1026 02:06:18.107905   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:18.107914   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:18.107979   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:18.142326   62745 cri.go:89] found id: ""
	I1026 02:06:18.142353   62745 logs.go:282] 0 containers: []
	W1026 02:06:18.142362   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:18.142370   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:18.142433   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:18.182660   62745 cri.go:89] found id: ""
	I1026 02:06:18.182700   62745 logs.go:282] 0 containers: []
	W1026 02:06:18.182710   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:18.182717   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:18.182783   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:18.225675   62745 cri.go:89] found id: ""
	I1026 02:06:18.225702   62745 logs.go:282] 0 containers: []
	W1026 02:06:18.225713   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:18.225721   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:18.225782   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:18.280184   62745 cri.go:89] found id: ""
	I1026 02:06:18.280218   62745 logs.go:282] 0 containers: []
	W1026 02:06:18.280228   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:18.280235   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:18.280297   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:18.314769   62745 cri.go:89] found id: ""
	I1026 02:06:18.314793   62745 logs.go:282] 0 containers: []
	W1026 02:06:18.314803   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:18.314811   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:18.314875   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:18.349686   62745 cri.go:89] found id: ""
	I1026 02:06:18.349712   62745 logs.go:282] 0 containers: []
	W1026 02:06:18.349723   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:18.349731   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:18.349791   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:18.384890   62745 cri.go:89] found id: ""
	I1026 02:06:18.384914   62745 logs.go:282] 0 containers: []
	W1026 02:06:18.384922   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:18.384931   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:18.384951   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:18.436690   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:18.436724   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:18.450449   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:18.450484   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:18.517832   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:18.517858   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:18.517872   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:18.593629   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:18.593671   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:21.132614   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:21.144963   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:21.145024   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:21.178673   62745 cri.go:89] found id: ""
	I1026 02:06:21.178698   62745 logs.go:282] 0 containers: []
	W1026 02:06:21.178712   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:21.178718   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:21.178766   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:21.209604   62745 cri.go:89] found id: ""
	I1026 02:06:21.209625   62745 logs.go:282] 0 containers: []
	W1026 02:06:21.209633   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:21.209638   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:21.209685   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:21.245359   62745 cri.go:89] found id: ""
	I1026 02:06:21.245387   62745 logs.go:282] 0 containers: []
	W1026 02:06:21.245395   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:21.245401   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:21.245478   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:21.280522   62745 cri.go:89] found id: ""
	I1026 02:06:21.280549   62745 logs.go:282] 0 containers: []
	W1026 02:06:21.280560   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:21.280568   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:21.280632   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:21.311215   62745 cri.go:89] found id: ""
	I1026 02:06:21.311258   62745 logs.go:282] 0 containers: []
	W1026 02:06:21.311269   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:21.311277   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:21.311345   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:21.344383   62745 cri.go:89] found id: ""
	I1026 02:06:21.344408   62745 logs.go:282] 0 containers: []
	W1026 02:06:21.344417   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:21.344423   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:21.344470   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:21.375505   62745 cri.go:89] found id: ""
	I1026 02:06:21.375529   62745 logs.go:282] 0 containers: []
	W1026 02:06:21.375537   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:21.375543   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:21.375594   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:21.408845   62745 cri.go:89] found id: ""
	I1026 02:06:21.408872   62745 logs.go:282] 0 containers: []
	W1026 02:06:21.408882   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:21.408893   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:21.408907   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:21.460091   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:21.460132   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:21.472960   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:21.472988   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:21.545280   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:21.545307   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:21.545321   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:21.625622   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:21.625660   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:24.163695   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:24.175697   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:24.175768   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:24.207555   62745 cri.go:89] found id: ""
	I1026 02:06:24.207580   62745 logs.go:282] 0 containers: []
	W1026 02:06:24.207590   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:24.207597   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:24.207659   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:24.238550   62745 cri.go:89] found id: ""
	I1026 02:06:24.238577   62745 logs.go:282] 0 containers: []
	W1026 02:06:24.238585   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:24.238593   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:24.238657   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:24.270725   62745 cri.go:89] found id: ""
	I1026 02:06:24.270756   62745 logs.go:282] 0 containers: []
	W1026 02:06:24.270767   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:24.270780   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:24.270840   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:24.304565   62745 cri.go:89] found id: ""
	I1026 02:06:24.304587   62745 logs.go:282] 0 containers: []
	W1026 02:06:24.304595   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:24.304601   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:24.304654   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:24.337792   62745 cri.go:89] found id: ""
	I1026 02:06:24.337820   62745 logs.go:282] 0 containers: []
	W1026 02:06:24.337831   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:24.337840   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:24.337902   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:24.372965   62745 cri.go:89] found id: ""
	I1026 02:06:24.372993   62745 logs.go:282] 0 containers: []
	W1026 02:06:24.373003   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:24.373011   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:24.373071   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:24.404874   62745 cri.go:89] found id: ""
	I1026 02:06:24.404902   62745 logs.go:282] 0 containers: []
	W1026 02:06:24.404910   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:24.404915   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:24.404965   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:24.438182   62745 cri.go:89] found id: ""
	I1026 02:06:24.438206   62745 logs.go:282] 0 containers: []
	W1026 02:06:24.438216   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:24.438227   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:24.438241   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:24.487859   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:24.487904   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:24.500443   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:24.500468   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:24.565149   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:24.565173   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:24.565185   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:24.644448   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:24.644483   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:27.190134   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:27.202811   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:27.202866   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:27.234433   62745 cri.go:89] found id: ""
	I1026 02:06:27.234458   62745 logs.go:282] 0 containers: []
	W1026 02:06:27.234469   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:27.234476   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:27.234536   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:27.270714   62745 cri.go:89] found id: ""
	I1026 02:06:27.270736   62745 logs.go:282] 0 containers: []
	W1026 02:06:27.270743   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:27.270750   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:27.270796   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:27.303782   62745 cri.go:89] found id: ""
	I1026 02:06:27.303808   62745 logs.go:282] 0 containers: []
	W1026 02:06:27.303819   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:27.303824   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:27.303873   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:27.333589   62745 cri.go:89] found id: ""
	I1026 02:06:27.333618   62745 logs.go:282] 0 containers: []
	W1026 02:06:27.333629   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:27.333637   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:27.333695   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:27.364461   62745 cri.go:89] found id: ""
	I1026 02:06:27.364490   62745 logs.go:282] 0 containers: []
	W1026 02:06:27.364499   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:27.364506   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:27.364570   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:27.397191   62745 cri.go:89] found id: ""
	I1026 02:06:27.397214   62745 logs.go:282] 0 containers: []
	W1026 02:06:27.397222   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:27.397228   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:27.397288   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:27.427780   62745 cri.go:89] found id: ""
	I1026 02:06:27.427809   62745 logs.go:282] 0 containers: []
	W1026 02:06:27.427819   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:27.427827   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:27.427887   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:27.460702   62745 cri.go:89] found id: ""
	I1026 02:06:27.460728   62745 logs.go:282] 0 containers: []
	W1026 02:06:27.460736   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:27.460745   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:27.460756   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:27.506782   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:27.506815   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:27.519441   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:27.519480   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:27.580627   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:27.580649   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:27.580661   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:27.657114   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:27.657147   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:30.196989   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:30.210008   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:30.210071   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:30.243027   62745 cri.go:89] found id: ""
	I1026 02:06:30.243055   62745 logs.go:282] 0 containers: []
	W1026 02:06:30.243064   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:30.243073   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:30.243133   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:30.274236   62745 cri.go:89] found id: ""
	I1026 02:06:30.274269   62745 logs.go:282] 0 containers: []
	W1026 02:06:30.274286   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:30.274294   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:30.274354   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:30.307917   62745 cri.go:89] found id: ""
	I1026 02:06:30.307957   62745 logs.go:282] 0 containers: []
	W1026 02:06:30.307968   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:30.307976   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:30.308034   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:30.343579   62745 cri.go:89] found id: ""
	I1026 02:06:30.343611   62745 logs.go:282] 0 containers: []
	W1026 02:06:30.343623   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:30.343631   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:30.343691   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:30.375164   62745 cri.go:89] found id: ""
	I1026 02:06:30.375186   62745 logs.go:282] 0 containers: []
	W1026 02:06:30.375193   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:30.375199   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:30.375254   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:30.408895   62745 cri.go:89] found id: ""
	I1026 02:06:30.408920   62745 logs.go:282] 0 containers: []
	W1026 02:06:30.408930   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:30.408938   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:30.409001   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:30.439274   62745 cri.go:89] found id: ""
	I1026 02:06:30.439296   62745 logs.go:282] 0 containers: []
	W1026 02:06:30.439304   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:30.439310   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:30.439370   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:30.471091   62745 cri.go:89] found id: ""
	I1026 02:06:30.471118   62745 logs.go:282] 0 containers: []
	W1026 02:06:30.471130   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:30.471141   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:30.471154   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:30.547117   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:30.547157   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:30.586923   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:30.586956   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:30.636445   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:30.636472   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:30.649546   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:30.649571   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:30.718659   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:33.219071   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:33.232931   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:33.233002   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:33.264587   62745 cri.go:89] found id: ""
	I1026 02:06:33.264621   62745 logs.go:282] 0 containers: []
	W1026 02:06:33.264633   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:33.264642   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:33.264699   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:33.298613   62745 cri.go:89] found id: ""
	I1026 02:06:33.298640   62745 logs.go:282] 0 containers: []
	W1026 02:06:33.298650   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:33.298658   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:33.298724   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:33.330811   62745 cri.go:89] found id: ""
	I1026 02:06:33.330835   62745 logs.go:282] 0 containers: []
	W1026 02:06:33.330842   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:33.330849   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:33.330896   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:33.361120   62745 cri.go:89] found id: ""
	I1026 02:06:33.361148   62745 logs.go:282] 0 containers: []
	W1026 02:06:33.361158   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:33.361166   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:33.361224   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:33.392734   62745 cri.go:89] found id: ""
	I1026 02:06:33.392763   62745 logs.go:282] 0 containers: []
	W1026 02:06:33.392772   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:33.392778   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:33.392836   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:33.429516   62745 cri.go:89] found id: ""
	I1026 02:06:33.429541   62745 logs.go:282] 0 containers: []
	W1026 02:06:33.429549   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:33.429557   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:33.429608   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:33.465411   62745 cri.go:89] found id: ""
	I1026 02:06:33.465462   62745 logs.go:282] 0 containers: []
	W1026 02:06:33.465472   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:33.465478   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:33.465526   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:33.502158   62745 cri.go:89] found id: ""
	I1026 02:06:33.502181   62745 logs.go:282] 0 containers: []
	W1026 02:06:33.502189   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:33.502197   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:33.502209   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:33.516171   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:33.516200   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:33.581371   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:33.581397   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:33.581409   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:33.660245   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:33.660276   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:33.695652   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:33.695680   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:36.246566   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:36.258931   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:36.259002   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:36.290554   62745 cri.go:89] found id: ""
	I1026 02:06:36.290583   62745 logs.go:282] 0 containers: []
	W1026 02:06:36.290594   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:36.290602   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:36.290664   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:36.322351   62745 cri.go:89] found id: ""
	I1026 02:06:36.322380   62745 logs.go:282] 0 containers: []
	W1026 02:06:36.322391   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:36.322400   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:36.322454   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:36.353248   62745 cri.go:89] found id: ""
	I1026 02:06:36.353279   62745 logs.go:282] 0 containers: []
	W1026 02:06:36.353289   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:36.353296   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:36.353352   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:36.386647   62745 cri.go:89] found id: ""
	I1026 02:06:36.386679   62745 logs.go:282] 0 containers: []
	W1026 02:06:36.386687   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:36.386693   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:36.386753   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:36.418688   62745 cri.go:89] found id: ""
	I1026 02:06:36.418714   62745 logs.go:282] 0 containers: []
	W1026 02:06:36.418729   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:36.418738   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:36.418796   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:36.453641   62745 cri.go:89] found id: ""
	I1026 02:06:36.453665   62745 logs.go:282] 0 containers: []
	W1026 02:06:36.453673   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:36.453681   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:36.453736   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:36.486122   62745 cri.go:89] found id: ""
	I1026 02:06:36.486145   62745 logs.go:282] 0 containers: []
	W1026 02:06:36.486152   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:36.486158   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:36.486220   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:36.524894   62745 cri.go:89] found id: ""
	I1026 02:06:36.524918   62745 logs.go:282] 0 containers: []
	W1026 02:06:36.524929   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:36.524938   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:36.524949   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:36.560351   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:36.560380   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:36.610639   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:36.610668   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:36.623311   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:36.623341   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:36.691029   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:36.691048   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:36.691059   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:39.266784   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:39.279857   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:39.279930   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:39.314381   62745 cri.go:89] found id: ""
	I1026 02:06:39.314404   62745 logs.go:282] 0 containers: []
	W1026 02:06:39.314414   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:39.314422   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:39.314485   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:39.345165   62745 cri.go:89] found id: ""
	I1026 02:06:39.345189   62745 logs.go:282] 0 containers: []
	W1026 02:06:39.345195   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:39.345202   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:39.345253   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:39.379326   62745 cri.go:89] found id: ""
	I1026 02:06:39.379358   62745 logs.go:282] 0 containers: []
	W1026 02:06:39.379369   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:39.379376   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:39.379428   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:39.410203   62745 cri.go:89] found id: ""
	I1026 02:06:39.410230   62745 logs.go:282] 0 containers: []
	W1026 02:06:39.410238   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:39.410244   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:39.410343   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:39.445836   62745 cri.go:89] found id: ""
	I1026 02:06:39.445864   62745 logs.go:282] 0 containers: []
	W1026 02:06:39.445874   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:39.445880   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:39.445929   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:39.478581   62745 cri.go:89] found id: ""
	I1026 02:06:39.478611   62745 logs.go:282] 0 containers: []
	W1026 02:06:39.478623   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:39.478630   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:39.478701   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:39.516164   62745 cri.go:89] found id: ""
	I1026 02:06:39.516189   62745 logs.go:282] 0 containers: []
	W1026 02:06:39.516197   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:39.516203   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:39.516247   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:39.547114   62745 cri.go:89] found id: ""
	I1026 02:06:39.547145   62745 logs.go:282] 0 containers: []
	W1026 02:06:39.547156   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:39.547168   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:39.547181   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:39.585134   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:39.585160   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:39.638793   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:39.638825   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:39.652471   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:39.652508   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:39.721286   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:39.721315   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:39.721328   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:42.297344   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:42.310372   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:42.310442   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:42.341290   62745 cri.go:89] found id: ""
	I1026 02:06:42.341321   62745 logs.go:282] 0 containers: []
	W1026 02:06:42.341332   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:42.341339   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:42.341402   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:42.381477   62745 cri.go:89] found id: ""
	I1026 02:06:42.381501   62745 logs.go:282] 0 containers: []
	W1026 02:06:42.381509   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:42.381515   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:42.381569   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:42.417909   62745 cri.go:89] found id: ""
	I1026 02:06:42.417933   62745 logs.go:282] 0 containers: []
	W1026 02:06:42.417947   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:42.417955   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:42.418015   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:42.453010   62745 cri.go:89] found id: ""
	I1026 02:06:42.453035   62745 logs.go:282] 0 containers: []
	W1026 02:06:42.453043   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:42.453049   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:42.453107   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:42.487736   62745 cri.go:89] found id: ""
	I1026 02:06:42.487764   62745 logs.go:282] 0 containers: []
	W1026 02:06:42.487776   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:42.487783   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:42.487841   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:42.521791   62745 cri.go:89] found id: ""
	I1026 02:06:42.521813   62745 logs.go:282] 0 containers: []
	W1026 02:06:42.521820   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:42.521826   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:42.521875   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:42.553777   62745 cri.go:89] found id: ""
	I1026 02:06:42.553801   62745 logs.go:282] 0 containers: []
	W1026 02:06:42.553808   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:42.553814   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:42.553864   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:42.584374   62745 cri.go:89] found id: ""
	I1026 02:06:42.584394   62745 logs.go:282] 0 containers: []
	W1026 02:06:42.584402   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:42.584410   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:42.584421   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:42.635442   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:42.635480   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:42.648419   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:42.648449   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:42.714599   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:42.714618   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:42.714629   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:42.791928   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:42.791962   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:45.327302   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:45.340107   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:45.340166   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:45.375793   62745 cri.go:89] found id: ""
	I1026 02:06:45.375819   62745 logs.go:282] 0 containers: []
	W1026 02:06:45.375827   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:45.375833   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:45.375890   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:45.407209   62745 cri.go:89] found id: ""
	I1026 02:06:45.407235   62745 logs.go:282] 0 containers: []
	W1026 02:06:45.407243   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:45.407249   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:45.407298   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:45.438793   62745 cri.go:89] found id: ""
	I1026 02:06:45.438825   62745 logs.go:282] 0 containers: []
	W1026 02:06:45.438834   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:45.438841   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:45.438902   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:45.470153   62745 cri.go:89] found id: ""
	I1026 02:06:45.470178   62745 logs.go:282] 0 containers: []
	W1026 02:06:45.470188   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:45.470195   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:45.470256   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:45.501603   62745 cri.go:89] found id: ""
	I1026 02:06:45.501632   62745 logs.go:282] 0 containers: []
	W1026 02:06:45.501642   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:45.501649   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:45.501721   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:45.532431   62745 cri.go:89] found id: ""
	I1026 02:06:45.532457   62745 logs.go:282] 0 containers: []
	W1026 02:06:45.532466   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:45.532472   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:45.532519   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:45.563978   62745 cri.go:89] found id: ""
	I1026 02:06:45.564009   62745 logs.go:282] 0 containers: []
	W1026 02:06:45.564021   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:45.564029   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:45.564092   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:45.596484   62745 cri.go:89] found id: ""
	I1026 02:06:45.596515   62745 logs.go:282] 0 containers: []
	W1026 02:06:45.596526   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:45.596536   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:45.596550   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:45.645740   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:45.645774   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:45.658655   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:45.658678   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:45.722742   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:45.722768   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:45.722797   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:45.800213   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:45.800246   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:48.338048   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:48.350446   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:48.350511   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:48.381651   62745 cri.go:89] found id: ""
	I1026 02:06:48.381675   62745 logs.go:282] 0 containers: []
	W1026 02:06:48.381683   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:48.381689   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:48.381739   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:48.414464   62745 cri.go:89] found id: ""
	I1026 02:06:48.414496   62745 logs.go:282] 0 containers: []
	W1026 02:06:48.414508   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:48.414518   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:48.414578   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:48.446712   62745 cri.go:89] found id: ""
	I1026 02:06:48.446742   62745 logs.go:282] 0 containers: []
	W1026 02:06:48.446775   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:48.446785   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:48.446850   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:48.480096   62745 cri.go:89] found id: ""
	I1026 02:06:48.480123   62745 logs.go:282] 0 containers: []
	W1026 02:06:48.480131   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:48.480137   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:48.480191   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:48.514851   62745 cri.go:89] found id: ""
	I1026 02:06:48.514879   62745 logs.go:282] 0 containers: []
	W1026 02:06:48.514890   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:48.514898   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:48.514960   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:48.546665   62745 cri.go:89] found id: ""
	I1026 02:06:48.546690   62745 logs.go:282] 0 containers: []
	W1026 02:06:48.546699   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:48.546706   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:48.546762   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:48.578933   62745 cri.go:89] found id: ""
	I1026 02:06:48.578960   62745 logs.go:282] 0 containers: []
	W1026 02:06:48.578967   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:48.578974   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:48.579033   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:48.610559   62745 cri.go:89] found id: ""
	I1026 02:06:48.610586   62745 logs.go:282] 0 containers: []
	W1026 02:06:48.610594   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:48.610604   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:48.610614   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:48.682337   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:48.682356   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:48.682367   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:48.757174   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:48.757216   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:48.798062   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:48.798093   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:48.846972   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:48.847006   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:51.361120   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:51.373623   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:51.373694   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:51.410403   62745 cri.go:89] found id: ""
	I1026 02:06:51.410429   62745 logs.go:282] 0 containers: []
	W1026 02:06:51.410437   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:51.410443   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:51.410490   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:51.446998   62745 cri.go:89] found id: ""
	I1026 02:06:51.447029   62745 logs.go:282] 0 containers: []
	W1026 02:06:51.447040   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:51.447048   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:51.447119   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:51.482389   62745 cri.go:89] found id: ""
	I1026 02:06:51.482416   62745 logs.go:282] 0 containers: []
	W1026 02:06:51.482425   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:51.482430   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:51.482477   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:51.518224   62745 cri.go:89] found id: ""
	I1026 02:06:51.518247   62745 logs.go:282] 0 containers: []
	W1026 02:06:51.518255   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:51.518261   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:51.518311   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:51.554364   62745 cri.go:89] found id: ""
	I1026 02:06:51.554393   62745 logs.go:282] 0 containers: []
	W1026 02:06:51.554400   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:51.554406   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:51.554453   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:51.590162   62745 cri.go:89] found id: ""
	I1026 02:06:51.590184   62745 logs.go:282] 0 containers: []
	W1026 02:06:51.590193   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:51.590199   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:51.590246   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:51.627329   62745 cri.go:89] found id: ""
	I1026 02:06:51.627351   62745 logs.go:282] 0 containers: []
	W1026 02:06:51.627360   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:51.627368   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:51.627422   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:51.662588   62745 cri.go:89] found id: ""
	I1026 02:06:51.662610   62745 logs.go:282] 0 containers: []
	W1026 02:06:51.662618   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:51.662627   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:51.662637   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:51.676043   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:51.676070   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:51.745339   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:51.745369   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:51.745381   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:51.823074   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:51.823113   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:51.864777   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:51.864810   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:54.414558   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:54.426859   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:54.426914   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:54.459308   62745 cri.go:89] found id: ""
	I1026 02:06:54.459336   62745 logs.go:282] 0 containers: []
	W1026 02:06:54.459344   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:54.459350   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:54.459407   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:54.492269   62745 cri.go:89] found id: ""
	I1026 02:06:54.492297   62745 logs.go:282] 0 containers: []
	W1026 02:06:54.492305   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:54.492312   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:54.492362   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:54.529884   62745 cri.go:89] found id: ""
	I1026 02:06:54.529909   62745 logs.go:282] 0 containers: []
	W1026 02:06:54.529919   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:54.529926   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:54.529985   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:54.563565   62745 cri.go:89] found id: ""
	I1026 02:06:54.563587   62745 logs.go:282] 0 containers: []
	W1026 02:06:54.563595   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:54.563601   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:54.563667   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:54.598043   62745 cri.go:89] found id: ""
	I1026 02:06:54.598071   62745 logs.go:282] 0 containers: []
	W1026 02:06:54.598081   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:54.598089   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:54.598154   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:54.630479   62745 cri.go:89] found id: ""
	I1026 02:06:54.630504   62745 logs.go:282] 0 containers: []
	W1026 02:06:54.630514   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:54.630521   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:54.630569   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:54.664162   62745 cri.go:89] found id: ""
	I1026 02:06:54.664190   62745 logs.go:282] 0 containers: []
	W1026 02:06:54.664202   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:54.664209   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:54.664263   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:54.695829   62745 cri.go:89] found id: ""
	I1026 02:06:54.695859   62745 logs.go:282] 0 containers: []
	W1026 02:06:54.695869   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:54.695879   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:54.695893   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:54.747091   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:54.747124   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:54.760287   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:54.760313   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:54.829243   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:54.829264   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:54.829276   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:54.905695   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:54.905734   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:06:57.442852   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:06:57.455134   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:06:57.455195   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:06:57.487771   62745 cri.go:89] found id: ""
	I1026 02:06:57.487794   62745 logs.go:282] 0 containers: []
	W1026 02:06:57.487801   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:06:57.487807   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:06:57.487855   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:06:57.522262   62745 cri.go:89] found id: ""
	I1026 02:06:57.522287   62745 logs.go:282] 0 containers: []
	W1026 02:06:57.522294   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:06:57.522300   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:06:57.522357   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:06:57.557463   62745 cri.go:89] found id: ""
	I1026 02:06:57.557497   62745 logs.go:282] 0 containers: []
	W1026 02:06:57.557509   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:06:57.557516   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:06:57.557581   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:06:57.594175   62745 cri.go:89] found id: ""
	I1026 02:06:57.594204   62745 logs.go:282] 0 containers: []
	W1026 02:06:57.594215   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:06:57.594223   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:06:57.594290   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:06:57.631355   62745 cri.go:89] found id: ""
	I1026 02:06:57.631380   62745 logs.go:282] 0 containers: []
	W1026 02:06:57.631389   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:06:57.631397   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:06:57.631460   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:06:57.663128   62745 cri.go:89] found id: ""
	I1026 02:06:57.663156   62745 logs.go:282] 0 containers: []
	W1026 02:06:57.663166   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:06:57.663174   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:06:57.663239   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:06:57.697480   62745 cri.go:89] found id: ""
	I1026 02:06:57.697509   62745 logs.go:282] 0 containers: []
	W1026 02:06:57.697520   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:06:57.697529   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:06:57.697591   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:06:57.731295   62745 cri.go:89] found id: ""
	I1026 02:06:57.731328   62745 logs.go:282] 0 containers: []
	W1026 02:06:57.731338   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:06:57.731348   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:06:57.731363   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:06:57.784889   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:06:57.784927   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:06:57.797964   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:06:57.797996   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:06:57.866042   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:06:57.866072   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:06:57.866088   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:06:57.948186   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:06:57.948221   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:00.490019   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:00.505005   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:00.505071   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:00.537331   62745 cri.go:89] found id: ""
	I1026 02:07:00.537356   62745 logs.go:282] 0 containers: []
	W1026 02:07:00.537364   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:00.537370   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:00.537442   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:00.568650   62745 cri.go:89] found id: ""
	I1026 02:07:00.568683   62745 logs.go:282] 0 containers: []
	W1026 02:07:00.568693   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:00.568712   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:00.568764   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:00.600239   62745 cri.go:89] found id: ""
	I1026 02:07:00.600273   62745 logs.go:282] 0 containers: []
	W1026 02:07:00.600283   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:00.600289   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:00.600340   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:00.631784   62745 cri.go:89] found id: ""
	I1026 02:07:00.631807   62745 logs.go:282] 0 containers: []
	W1026 02:07:00.631814   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:00.631820   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:00.631870   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:00.671299   62745 cri.go:89] found id: ""
	I1026 02:07:00.671325   62745 logs.go:282] 0 containers: []
	W1026 02:07:00.671335   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:00.671343   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:00.671402   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:00.704770   62745 cri.go:89] found id: ""
	I1026 02:07:00.704803   62745 logs.go:282] 0 containers: []
	W1026 02:07:00.704815   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:00.704823   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:00.704878   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:00.738455   62745 cri.go:89] found id: ""
	I1026 02:07:00.738483   62745 logs.go:282] 0 containers: []
	W1026 02:07:00.738495   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:00.738504   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:00.738562   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:00.772180   62745 cri.go:89] found id: ""
	I1026 02:07:00.772205   62745 logs.go:282] 0 containers: []
	W1026 02:07:00.772217   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:00.772225   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:00.772238   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:00.784854   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:00.784877   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:00.859263   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:00.859286   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:00.859300   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:00.933055   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:00.933090   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:00.969165   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:00.969194   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:03.521059   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:03.533917   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:03.533980   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:03.567714   62745 cri.go:89] found id: ""
	I1026 02:07:03.567745   62745 logs.go:282] 0 containers: []
	W1026 02:07:03.567756   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:03.567765   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:03.567816   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:03.600069   62745 cri.go:89] found id: ""
	I1026 02:07:03.600096   62745 logs.go:282] 0 containers: []
	W1026 02:07:03.600104   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:03.600109   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:03.600158   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:03.634048   62745 cri.go:89] found id: ""
	I1026 02:07:03.634069   62745 logs.go:282] 0 containers: []
	W1026 02:07:03.634077   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:03.634085   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:03.634147   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:03.666190   62745 cri.go:89] found id: ""
	I1026 02:07:03.666219   62745 logs.go:282] 0 containers: []
	W1026 02:07:03.666227   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:03.666233   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:03.666284   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:03.698739   62745 cri.go:89] found id: ""
	I1026 02:07:03.698762   62745 logs.go:282] 0 containers: []
	W1026 02:07:03.698770   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:03.698776   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:03.698820   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:03.731198   62745 cri.go:89] found id: ""
	I1026 02:07:03.731227   62745 logs.go:282] 0 containers: []
	W1026 02:07:03.731235   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:03.731242   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:03.731295   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:03.763557   62745 cri.go:89] found id: ""
	I1026 02:07:03.763587   62745 logs.go:282] 0 containers: []
	W1026 02:07:03.763598   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:03.763604   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:03.763666   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:03.797591   62745 cri.go:89] found id: ""
	I1026 02:07:03.797624   62745 logs.go:282] 0 containers: []
	W1026 02:07:03.797635   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:03.797646   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:03.797659   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:03.876991   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:03.877030   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:03.914148   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:03.914174   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:03.964260   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:03.964297   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:03.977178   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:03.977207   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:04.044076   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:06.544738   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:06.559517   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:06.559590   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:06.595039   62745 cri.go:89] found id: ""
	I1026 02:07:06.595069   62745 logs.go:282] 0 containers: []
	W1026 02:07:06.595081   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:06.595088   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:06.595150   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:06.634699   62745 cri.go:89] found id: ""
	I1026 02:07:06.634724   62745 logs.go:282] 0 containers: []
	W1026 02:07:06.634734   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:06.634742   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:06.634807   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:06.668025   62745 cri.go:89] found id: ""
	I1026 02:07:06.668057   62745 logs.go:282] 0 containers: []
	W1026 02:07:06.668070   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:06.668077   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:06.668144   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:06.699415   62745 cri.go:89] found id: ""
	I1026 02:07:06.699443   62745 logs.go:282] 0 containers: []
	W1026 02:07:06.699452   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:06.699458   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:06.699518   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:06.731125   62745 cri.go:89] found id: ""
	I1026 02:07:06.731152   62745 logs.go:282] 0 containers: []
	W1026 02:07:06.731163   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:06.731170   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:06.731226   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:06.763697   62745 cri.go:89] found id: ""
	I1026 02:07:06.763727   62745 logs.go:282] 0 containers: []
	W1026 02:07:06.763735   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:06.763741   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:06.763797   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:06.796924   62745 cri.go:89] found id: ""
	I1026 02:07:06.796956   62745 logs.go:282] 0 containers: []
	W1026 02:07:06.796964   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:06.796970   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:06.797032   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:06.828696   62745 cri.go:89] found id: ""
	I1026 02:07:06.828724   62745 logs.go:282] 0 containers: []
	W1026 02:07:06.828734   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:06.828745   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:06.828762   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:06.878771   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:06.878816   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:06.892038   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:06.892065   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:06.961856   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:06.961883   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:06.961897   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:07.035069   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:07.035102   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:09.571983   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:09.584509   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:09.584583   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:09.619361   62745 cri.go:89] found id: ""
	I1026 02:07:09.619389   62745 logs.go:282] 0 containers: []
	W1026 02:07:09.619400   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:09.619409   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:09.619469   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:09.653625   62745 cri.go:89] found id: ""
	I1026 02:07:09.653653   62745 logs.go:282] 0 containers: []
	W1026 02:07:09.653663   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:09.653671   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:09.653734   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:09.692876   62745 cri.go:89] found id: ""
	I1026 02:07:09.692906   62745 logs.go:282] 0 containers: []
	W1026 02:07:09.692920   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:09.692927   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:09.692989   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:09.726058   62745 cri.go:89] found id: ""
	I1026 02:07:09.726080   62745 logs.go:282] 0 containers: []
	W1026 02:07:09.726088   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:09.726094   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:09.726142   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:09.767085   62745 cri.go:89] found id: ""
	I1026 02:07:09.767106   62745 logs.go:282] 0 containers: []
	W1026 02:07:09.767114   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:09.767120   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:09.767171   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:09.800385   62745 cri.go:89] found id: ""
	I1026 02:07:09.800411   62745 logs.go:282] 0 containers: []
	W1026 02:07:09.800421   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:09.800429   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:09.800490   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:09.833916   62745 cri.go:89] found id: ""
	I1026 02:07:09.833945   62745 logs.go:282] 0 containers: []
	W1026 02:07:09.833955   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:09.833962   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:09.834024   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:09.870980   62745 cri.go:89] found id: ""
	I1026 02:07:09.871011   62745 logs.go:282] 0 containers: []
	W1026 02:07:09.871023   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:09.871034   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:09.871045   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:09.911303   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:09.911339   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:09.985639   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:09.985682   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:10.005161   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:10.005191   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:10.075685   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:10.075707   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:10.075721   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:12.652289   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:12.664631   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:12.664706   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:12.702751   62745 cri.go:89] found id: ""
	I1026 02:07:12.702782   62745 logs.go:282] 0 containers: []
	W1026 02:07:12.702793   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:12.702801   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:12.702856   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:12.736207   62745 cri.go:89] found id: ""
	I1026 02:07:12.736230   62745 logs.go:282] 0 containers: []
	W1026 02:07:12.736240   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:12.736248   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:12.736312   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:12.767932   62745 cri.go:89] found id: ""
	I1026 02:07:12.767962   62745 logs.go:282] 0 containers: []
	W1026 02:07:12.767972   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:12.767980   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:12.768037   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:12.799843   62745 cri.go:89] found id: ""
	I1026 02:07:12.799869   62745 logs.go:282] 0 containers: []
	W1026 02:07:12.799877   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:12.799894   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:12.799947   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:12.831972   62745 cri.go:89] found id: ""
	I1026 02:07:12.832002   62745 logs.go:282] 0 containers: []
	W1026 02:07:12.832014   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:12.832021   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:12.832084   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:12.865967   62745 cri.go:89] found id: ""
	I1026 02:07:12.865995   62745 logs.go:282] 0 containers: []
	W1026 02:07:12.866005   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:12.866013   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:12.866073   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:12.901089   62745 cri.go:89] found id: ""
	I1026 02:07:12.901117   62745 logs.go:282] 0 containers: []
	W1026 02:07:12.901125   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:12.901132   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:12.901187   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:12.933143   62745 cri.go:89] found id: ""
	I1026 02:07:12.933170   62745 logs.go:282] 0 containers: []
	W1026 02:07:12.933178   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:12.933186   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:12.933195   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:13.016014   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:13.016059   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:13.058520   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:13.058556   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:13.110178   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:13.110219   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:13.124831   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:13.124865   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:13.195503   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:15.695875   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:15.711218   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:15.711288   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:15.769098   62745 cri.go:89] found id: ""
	I1026 02:07:15.769121   62745 logs.go:282] 0 containers: []
	W1026 02:07:15.769129   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:15.769135   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:15.769189   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:15.805018   62745 cri.go:89] found id: ""
	I1026 02:07:15.805046   62745 logs.go:282] 0 containers: []
	W1026 02:07:15.805054   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:15.805061   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:15.805125   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:15.842671   62745 cri.go:89] found id: ""
	I1026 02:07:15.842694   62745 logs.go:282] 0 containers: []
	W1026 02:07:15.842702   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:15.842709   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:15.842757   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:15.874827   62745 cri.go:89] found id: ""
	I1026 02:07:15.874862   62745 logs.go:282] 0 containers: []
	W1026 02:07:15.874873   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:15.874882   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:15.874942   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:15.908597   62745 cri.go:89] found id: ""
	I1026 02:07:15.908623   62745 logs.go:282] 0 containers: []
	W1026 02:07:15.908648   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:15.908655   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:15.908713   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:15.943192   62745 cri.go:89] found id: ""
	I1026 02:07:15.943226   62745 logs.go:282] 0 containers: []
	W1026 02:07:15.943237   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:15.943243   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:15.943313   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:15.982067   62745 cri.go:89] found id: ""
	I1026 02:07:15.982096   62745 logs.go:282] 0 containers: []
	W1026 02:07:15.982107   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:15.982114   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:15.982173   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:16.013666   62745 cri.go:89] found id: ""
	I1026 02:07:16.013695   62745 logs.go:282] 0 containers: []
	W1026 02:07:16.013706   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:16.013717   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:16.013732   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:16.064292   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:16.064328   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:16.077236   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:16.077262   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:16.148584   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:16.148612   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:16.148626   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:16.226871   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:16.226905   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:18.765112   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:18.780092   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:18.780166   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:18.813016   62745 cri.go:89] found id: ""
	I1026 02:07:18.813040   62745 logs.go:282] 0 containers: []
	W1026 02:07:18.813047   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:18.813053   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:18.813102   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:18.850376   62745 cri.go:89] found id: ""
	I1026 02:07:18.850399   62745 logs.go:282] 0 containers: []
	W1026 02:07:18.850410   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:18.850417   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:18.850475   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:18.882562   62745 cri.go:89] found id: ""
	I1026 02:07:18.882589   62745 logs.go:282] 0 containers: []
	W1026 02:07:18.882600   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:18.882607   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:18.882665   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:18.915214   62745 cri.go:89] found id: ""
	I1026 02:07:18.915243   62745 logs.go:282] 0 containers: []
	W1026 02:07:18.915253   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:18.915259   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:18.915319   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:18.946171   62745 cri.go:89] found id: ""
	I1026 02:07:18.946197   62745 logs.go:282] 0 containers: []
	W1026 02:07:18.946205   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:18.946211   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:18.946258   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:18.978013   62745 cri.go:89] found id: ""
	I1026 02:07:18.978041   62745 logs.go:282] 0 containers: []
	W1026 02:07:18.978049   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:18.978055   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:18.978111   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:19.016121   62745 cri.go:89] found id: ""
	I1026 02:07:19.016149   62745 logs.go:282] 0 containers: []
	W1026 02:07:19.016161   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:19.016169   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:19.016226   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:19.047167   62745 cri.go:89] found id: ""
	I1026 02:07:19.047196   62745 logs.go:282] 0 containers: []
	W1026 02:07:19.047204   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:19.047213   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:19.047222   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:19.098945   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:19.098981   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:19.111645   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:19.111675   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:19.178986   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:19.179001   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:19.179012   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:19.251707   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:19.251741   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:21.790677   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:21.803898   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:21.803981   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:21.837240   62745 cri.go:89] found id: ""
	I1026 02:07:21.837267   62745 logs.go:282] 0 containers: []
	W1026 02:07:21.837277   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:21.837283   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:21.837330   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:21.869245   62745 cri.go:89] found id: ""
	I1026 02:07:21.869276   62745 logs.go:282] 0 containers: []
	W1026 02:07:21.869287   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:21.869296   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:21.869356   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:21.899736   62745 cri.go:89] found id: ""
	I1026 02:07:21.899762   62745 logs.go:282] 0 containers: []
	W1026 02:07:21.899771   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:21.899777   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:21.899827   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:21.931420   62745 cri.go:89] found id: ""
	I1026 02:07:21.931439   62745 logs.go:282] 0 containers: []
	W1026 02:07:21.931446   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:21.931453   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:21.931498   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:21.963732   62745 cri.go:89] found id: ""
	I1026 02:07:21.963760   62745 logs.go:282] 0 containers: []
	W1026 02:07:21.963768   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:21.963774   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:21.963823   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:21.994522   62745 cri.go:89] found id: ""
	I1026 02:07:21.994550   62745 logs.go:282] 0 containers: []
	W1026 02:07:21.994560   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:21.994567   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:21.994628   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:22.028461   62745 cri.go:89] found id: ""
	I1026 02:07:22.028487   62745 logs.go:282] 0 containers: []
	W1026 02:07:22.028495   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:22.028501   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:22.028548   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:22.069623   62745 cri.go:89] found id: ""
	I1026 02:07:22.069677   62745 logs.go:282] 0 containers: []
	W1026 02:07:22.069692   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:22.069703   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:22.069716   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:22.121635   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:22.121670   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:22.135584   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:22.135617   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:22.199981   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:22.200005   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:22.200021   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:22.279029   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:22.279060   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:24.817446   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:24.830485   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:24.830554   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:24.862966   62745 cri.go:89] found id: ""
	I1026 02:07:24.862999   62745 logs.go:282] 0 containers: []
	W1026 02:07:24.863007   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:24.863013   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:24.863070   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:24.894041   62745 cri.go:89] found id: ""
	I1026 02:07:24.894073   62745 logs.go:282] 0 containers: []
	W1026 02:07:24.894084   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:24.894089   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:24.894150   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:24.927062   62745 cri.go:89] found id: ""
	I1026 02:07:24.927093   62745 logs.go:282] 0 containers: []
	W1026 02:07:24.927102   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:24.927108   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:24.927172   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:24.963297   62745 cri.go:89] found id: ""
	I1026 02:07:24.963329   62745 logs.go:282] 0 containers: []
	W1026 02:07:24.963340   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:24.963347   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:24.963409   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:24.998408   62745 cri.go:89] found id: ""
	I1026 02:07:24.998437   62745 logs.go:282] 0 containers: []
	W1026 02:07:24.998446   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:24.998453   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:24.998511   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:25.029763   62745 cri.go:89] found id: ""
	I1026 02:07:25.029787   62745 logs.go:282] 0 containers: []
	W1026 02:07:25.029795   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:25.029801   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:25.029859   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:25.066700   62745 cri.go:89] found id: ""
	I1026 02:07:25.066723   62745 logs.go:282] 0 containers: []
	W1026 02:07:25.066730   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:25.066736   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:25.066786   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:25.099954   62745 cri.go:89] found id: ""
	I1026 02:07:25.099984   62745 logs.go:282] 0 containers: []
	W1026 02:07:25.099995   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:25.100006   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:25.100021   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:25.149728   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:25.149762   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:25.163029   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:25.163077   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:25.234081   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:25.234103   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:25.234118   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:25.318655   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:25.318690   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:27.862030   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:27.874072   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:27.874138   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:27.905856   62745 cri.go:89] found id: ""
	I1026 02:07:27.905887   62745 logs.go:282] 0 containers: []
	W1026 02:07:27.905895   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:27.905901   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:27.905960   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:27.938698   62745 cri.go:89] found id: ""
	I1026 02:07:27.938724   62745 logs.go:282] 0 containers: []
	W1026 02:07:27.938733   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:27.938738   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:27.938786   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:27.971463   62745 cri.go:89] found id: ""
	I1026 02:07:27.971488   62745 logs.go:282] 0 containers: []
	W1026 02:07:27.971495   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:27.971501   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:27.971550   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:28.005774   62745 cri.go:89] found id: ""
	I1026 02:07:28.005802   62745 logs.go:282] 0 containers: []
	W1026 02:07:28.005810   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:28.005815   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:28.005867   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:28.038145   62745 cri.go:89] found id: ""
	I1026 02:07:28.038171   62745 logs.go:282] 0 containers: []
	W1026 02:07:28.038179   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:28.038185   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:28.038240   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:28.069925   62745 cri.go:89] found id: ""
	I1026 02:07:28.069956   62745 logs.go:282] 0 containers: []
	W1026 02:07:28.069967   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:28.069976   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:28.070030   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:28.102171   62745 cri.go:89] found id: ""
	I1026 02:07:28.102198   62745 logs.go:282] 0 containers: []
	W1026 02:07:28.102206   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:28.102212   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:28.102269   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:28.137136   62745 cri.go:89] found id: ""
	I1026 02:07:28.137160   62745 logs.go:282] 0 containers: []
	W1026 02:07:28.137170   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:28.137180   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:28.137204   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:28.187087   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:28.187122   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:28.200246   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:28.200272   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:28.268977   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:28.268997   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:28.269011   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:28.348053   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:28.348085   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:30.885122   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:30.897635   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:30.897708   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:30.929357   62745 cri.go:89] found id: ""
	I1026 02:07:30.929381   62745 logs.go:282] 0 containers: []
	W1026 02:07:30.929389   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:30.929395   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:30.929470   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:30.968281   62745 cri.go:89] found id: ""
	I1026 02:07:30.968313   62745 logs.go:282] 0 containers: []
	W1026 02:07:30.968323   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:30.968330   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:30.968390   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:31.002710   62745 cri.go:89] found id: ""
	I1026 02:07:31.002739   62745 logs.go:282] 0 containers: []
	W1026 02:07:31.002749   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:31.002755   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:31.002815   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:31.034820   62745 cri.go:89] found id: ""
	I1026 02:07:31.034845   62745 logs.go:282] 0 containers: []
	W1026 02:07:31.034853   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:31.034858   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:31.034904   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:31.066878   62745 cri.go:89] found id: ""
	I1026 02:07:31.066906   62745 logs.go:282] 0 containers: []
	W1026 02:07:31.066913   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:31.066926   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:31.066976   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:31.099026   62745 cri.go:89] found id: ""
	I1026 02:07:31.099052   62745 logs.go:282] 0 containers: []
	W1026 02:07:31.099060   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:31.099066   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:31.099119   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:31.133025   62745 cri.go:89] found id: ""
	I1026 02:07:31.133056   62745 logs.go:282] 0 containers: []
	W1026 02:07:31.133065   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:31.133070   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:31.133119   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:31.165739   62745 cri.go:89] found id: ""
	I1026 02:07:31.165774   62745 logs.go:282] 0 containers: []
	W1026 02:07:31.165785   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:31.165795   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:31.165809   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:31.233734   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:31.233756   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:31.233767   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:31.313364   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:31.313396   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:31.349829   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:31.349864   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:31.400897   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:31.400932   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:33.914141   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:33.926206   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:33.926284   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:33.960359   62745 cri.go:89] found id: ""
	I1026 02:07:33.960390   62745 logs.go:282] 0 containers: []
	W1026 02:07:33.960401   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:33.960408   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:33.960461   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:33.991394   62745 cri.go:89] found id: ""
	I1026 02:07:33.991419   62745 logs.go:282] 0 containers: []
	W1026 02:07:33.991427   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:33.991433   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:33.991491   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:34.023354   62745 cri.go:89] found id: ""
	I1026 02:07:34.023383   62745 logs.go:282] 0 containers: []
	W1026 02:07:34.023394   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:34.023402   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:34.023459   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:34.054427   62745 cri.go:89] found id: ""
	I1026 02:07:34.054452   62745 logs.go:282] 0 containers: []
	W1026 02:07:34.054463   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:34.054470   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:34.054529   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:34.084889   62745 cri.go:89] found id: ""
	I1026 02:07:34.084912   62745 logs.go:282] 0 containers: []
	W1026 02:07:34.084919   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:34.084924   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:34.084975   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:34.116018   62745 cri.go:89] found id: ""
	I1026 02:07:34.116052   62745 logs.go:282] 0 containers: []
	W1026 02:07:34.116063   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:34.116071   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:34.116136   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:34.151471   62745 cri.go:89] found id: ""
	I1026 02:07:34.151497   62745 logs.go:282] 0 containers: []
	W1026 02:07:34.151505   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:34.151512   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:34.151558   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:34.186774   62745 cri.go:89] found id: ""
	I1026 02:07:34.186807   62745 logs.go:282] 0 containers: []
	W1026 02:07:34.186819   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:34.186831   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:34.186852   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:34.257139   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:34.257159   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:34.257170   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:34.338903   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:34.338935   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:34.375388   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:34.375419   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:34.422999   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:34.423032   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:36.937328   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:36.949435   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:36.949509   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:36.984087   62745 cri.go:89] found id: ""
	I1026 02:07:36.984124   62745 logs.go:282] 0 containers: []
	W1026 02:07:36.984136   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:36.984145   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:36.984206   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:37.019912   62745 cri.go:89] found id: ""
	I1026 02:07:37.019939   62745 logs.go:282] 0 containers: []
	W1026 02:07:37.019947   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:37.019954   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:37.020010   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:37.053267   62745 cri.go:89] found id: ""
	I1026 02:07:37.053298   62745 logs.go:282] 0 containers: []
	W1026 02:07:37.053309   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:37.053317   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:37.053378   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:37.085611   62745 cri.go:89] found id: ""
	I1026 02:07:37.085638   62745 logs.go:282] 0 containers: []
	W1026 02:07:37.085646   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:37.085652   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:37.085719   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:37.122232   62745 cri.go:89] found id: ""
	I1026 02:07:37.122261   62745 logs.go:282] 0 containers: []
	W1026 02:07:37.122273   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:37.122281   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:37.122341   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:37.157453   62745 cri.go:89] found id: ""
	I1026 02:07:37.157484   62745 logs.go:282] 0 containers: []
	W1026 02:07:37.157497   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:37.157506   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:37.157571   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:37.190447   62745 cri.go:89] found id: ""
	I1026 02:07:37.190499   62745 logs.go:282] 0 containers: []
	W1026 02:07:37.190511   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:37.190520   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:37.190579   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:37.222653   62745 cri.go:89] found id: ""
	I1026 02:07:37.222693   62745 logs.go:282] 0 containers: []
	W1026 02:07:37.222704   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:37.222715   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:37.222727   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:37.300290   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:37.300334   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:37.342382   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:37.342410   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:37.390612   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:37.390648   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:37.405298   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:37.405324   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:37.468405   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:39.969006   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:39.981596   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:39.981663   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:40.014471   62745 cri.go:89] found id: ""
	I1026 02:07:40.014498   62745 logs.go:282] 0 containers: []
	W1026 02:07:40.014506   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:40.014513   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:40.014572   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:40.044844   62745 cri.go:89] found id: ""
	I1026 02:07:40.044864   62745 logs.go:282] 0 containers: []
	W1026 02:07:40.044872   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:40.044877   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:40.044931   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:40.076739   62745 cri.go:89] found id: ""
	I1026 02:07:40.076767   62745 logs.go:282] 0 containers: []
	W1026 02:07:40.076778   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:40.076785   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:40.076847   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:40.113147   62745 cri.go:89] found id: ""
	I1026 02:07:40.113173   62745 logs.go:282] 0 containers: []
	W1026 02:07:40.113185   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:40.113193   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:40.113248   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:40.144403   62745 cri.go:89] found id: ""
	I1026 02:07:40.144431   62745 logs.go:282] 0 containers: []
	W1026 02:07:40.144441   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:40.144449   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:40.144497   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:40.176560   62745 cri.go:89] found id: ""
	I1026 02:07:40.176585   62745 logs.go:282] 0 containers: []
	W1026 02:07:40.176593   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:40.176599   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:40.176647   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:40.208831   62745 cri.go:89] found id: ""
	I1026 02:07:40.208864   62745 logs.go:282] 0 containers: []
	W1026 02:07:40.208884   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:40.208892   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:40.208949   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:40.247489   62745 cri.go:89] found id: ""
	I1026 02:07:40.247516   62745 logs.go:282] 0 containers: []
	W1026 02:07:40.247527   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:40.247538   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:40.247556   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:40.300537   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:40.300570   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:40.313996   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:40.314025   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:40.382390   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:40.382411   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:40.382422   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:40.454832   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:40.454866   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:42.990657   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:43.002906   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:43.002980   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:43.038888   62745 cri.go:89] found id: ""
	I1026 02:07:43.038921   62745 logs.go:282] 0 containers: []
	W1026 02:07:43.038934   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:43.038942   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:43.039007   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:43.071463   62745 cri.go:89] found id: ""
	I1026 02:07:43.071490   62745 logs.go:282] 0 containers: []
	W1026 02:07:43.071500   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:43.071507   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:43.071569   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:43.104362   62745 cri.go:89] found id: ""
	I1026 02:07:43.104392   62745 logs.go:282] 0 containers: []
	W1026 02:07:43.104403   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:43.104411   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:43.104469   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:43.137037   62745 cri.go:89] found id: ""
	I1026 02:07:43.137069   62745 logs.go:282] 0 containers: []
	W1026 02:07:43.137080   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:43.137087   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:43.137140   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:43.170616   62745 cri.go:89] found id: ""
	I1026 02:07:43.170641   62745 logs.go:282] 0 containers: []
	W1026 02:07:43.170649   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:43.170655   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:43.170709   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:43.203376   62745 cri.go:89] found id: ""
	I1026 02:07:43.203404   62745 logs.go:282] 0 containers: []
	W1026 02:07:43.203412   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:43.203417   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:43.203471   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:43.235154   62745 cri.go:89] found id: ""
	I1026 02:07:43.235177   62745 logs.go:282] 0 containers: []
	W1026 02:07:43.235185   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:43.235190   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:43.235241   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:43.268212   62745 cri.go:89] found id: ""
	I1026 02:07:43.268236   62745 logs.go:282] 0 containers: []
	W1026 02:07:43.268248   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:43.268258   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:43.268270   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:43.339460   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:43.339479   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:43.339493   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:43.422470   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:43.422508   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:43.460588   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:43.460613   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:43.509466   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:43.509500   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:46.023798   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:46.036335   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:46.036394   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:46.069673   62745 cri.go:89] found id: ""
	I1026 02:07:46.069698   62745 logs.go:282] 0 containers: []
	W1026 02:07:46.069706   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:46.069712   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:46.069760   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:46.101565   62745 cri.go:89] found id: ""
	I1026 02:07:46.101590   62745 logs.go:282] 0 containers: []
	W1026 02:07:46.101599   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:46.101606   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:46.101668   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:46.133748   62745 cri.go:89] found id: ""
	I1026 02:07:46.133776   62745 logs.go:282] 0 containers: []
	W1026 02:07:46.133786   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:46.133794   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:46.133851   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:46.164918   62745 cri.go:89] found id: ""
	I1026 02:07:46.164953   62745 logs.go:282] 0 containers: []
	W1026 02:07:46.164963   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:46.164972   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:46.165029   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:46.198417   62745 cri.go:89] found id: ""
	I1026 02:07:46.198439   62745 logs.go:282] 0 containers: []
	W1026 02:07:46.198446   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:46.198452   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:46.198507   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:46.233857   62745 cri.go:89] found id: ""
	I1026 02:07:46.233882   62745 logs.go:282] 0 containers: []
	W1026 02:07:46.233891   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:46.233896   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:46.233943   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:46.267445   62745 cri.go:89] found id: ""
	I1026 02:07:46.267476   62745 logs.go:282] 0 containers: []
	W1026 02:07:46.267485   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:46.267498   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:46.267547   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:46.300564   62745 cri.go:89] found id: ""
	I1026 02:07:46.300594   62745 logs.go:282] 0 containers: []
	W1026 02:07:46.300601   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:46.300609   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:46.300619   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:46.353129   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:46.353163   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:46.366154   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:46.366183   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:46.439252   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:46.439271   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:46.439286   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:46.519713   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:46.519748   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:49.057451   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:49.070194   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:49.070269   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:49.102886   62745 cri.go:89] found id: ""
	I1026 02:07:49.102915   62745 logs.go:282] 0 containers: []
	W1026 02:07:49.102926   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:49.102935   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:49.102994   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:49.134727   62745 cri.go:89] found id: ""
	I1026 02:07:49.134755   62745 logs.go:282] 0 containers: []
	W1026 02:07:49.134765   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:49.134773   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:49.134832   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:49.166121   62745 cri.go:89] found id: ""
	I1026 02:07:49.166148   62745 logs.go:282] 0 containers: []
	W1026 02:07:49.166158   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:49.166166   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:49.166223   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:49.197999   62745 cri.go:89] found id: ""
	I1026 02:07:49.198033   62745 logs.go:282] 0 containers: []
	W1026 02:07:49.198045   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:49.198052   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:49.198111   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:49.231619   62745 cri.go:89] found id: ""
	I1026 02:07:49.231649   62745 logs.go:282] 0 containers: []
	W1026 02:07:49.231661   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:49.231669   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:49.231733   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:49.264930   62745 cri.go:89] found id: ""
	I1026 02:07:49.264961   62745 logs.go:282] 0 containers: []
	W1026 02:07:49.264973   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:49.264981   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:49.265040   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:49.298194   62745 cri.go:89] found id: ""
	I1026 02:07:49.298226   62745 logs.go:282] 0 containers: []
	W1026 02:07:49.298237   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:49.298244   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:49.298304   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:49.330293   62745 cri.go:89] found id: ""
	I1026 02:07:49.330325   62745 logs.go:282] 0 containers: []
	W1026 02:07:49.330336   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:49.330346   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:49.330361   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:49.365408   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:49.365457   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:49.415642   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:49.415677   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:49.428140   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:49.428168   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:49.499178   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:49.499205   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:49.499220   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:52.079906   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:52.093071   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:52.093149   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:52.126358   62745 cri.go:89] found id: ""
	I1026 02:07:52.126381   62745 logs.go:282] 0 containers: []
	W1026 02:07:52.126389   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:52.126402   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:52.126461   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:52.159802   62745 cri.go:89] found id: ""
	I1026 02:07:52.159833   62745 logs.go:282] 0 containers: []
	W1026 02:07:52.159844   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:52.159852   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:52.159914   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:52.194500   62745 cri.go:89] found id: ""
	I1026 02:07:52.194530   62745 logs.go:282] 0 containers: []
	W1026 02:07:52.194541   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:52.194555   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:52.194616   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:52.229565   62745 cri.go:89] found id: ""
	I1026 02:07:52.229589   62745 logs.go:282] 0 containers: []
	W1026 02:07:52.229597   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:52.229603   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:52.229664   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:52.265769   62745 cri.go:89] found id: ""
	I1026 02:07:52.265808   62745 logs.go:282] 0 containers: []
	W1026 02:07:52.265819   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:52.265827   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:52.265887   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:52.299292   62745 cri.go:89] found id: ""
	I1026 02:07:52.299316   62745 logs.go:282] 0 containers: []
	W1026 02:07:52.299324   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:52.299330   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:52.299384   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:52.332085   62745 cri.go:89] found id: ""
	I1026 02:07:52.332108   62745 logs.go:282] 0 containers: []
	W1026 02:07:52.332116   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:52.332122   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:52.332180   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:52.364882   62745 cri.go:89] found id: ""
	I1026 02:07:52.364907   62745 logs.go:282] 0 containers: []
	W1026 02:07:52.364915   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:52.364923   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:52.364934   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:52.401295   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:52.401326   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:52.452282   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:52.452315   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:52.465630   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:52.465659   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:52.532282   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:52.532303   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:52.532316   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:55.107880   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:55.120420   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:55.120498   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:55.154952   62745 cri.go:89] found id: ""
	I1026 02:07:55.154981   62745 logs.go:282] 0 containers: []
	W1026 02:07:55.154991   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:55.154997   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:55.155046   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:55.189882   62745 cri.go:89] found id: ""
	I1026 02:07:55.189909   62745 logs.go:282] 0 containers: []
	W1026 02:07:55.189919   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:55.189935   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:55.189985   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:55.221941   62745 cri.go:89] found id: ""
	I1026 02:07:55.221965   62745 logs.go:282] 0 containers: []
	W1026 02:07:55.221973   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:55.221979   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:55.222027   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:55.268127   62745 cri.go:89] found id: ""
	I1026 02:07:55.268155   62745 logs.go:282] 0 containers: []
	W1026 02:07:55.268165   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:55.268173   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:55.268229   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:55.301559   62745 cri.go:89] found id: ""
	I1026 02:07:55.301583   62745 logs.go:282] 0 containers: []
	W1026 02:07:55.301591   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:55.301597   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:55.301644   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:55.335479   62745 cri.go:89] found id: ""
	I1026 02:07:55.335509   62745 logs.go:282] 0 containers: []
	W1026 02:07:55.335521   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:55.335529   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:55.335601   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:55.366749   62745 cri.go:89] found id: ""
	I1026 02:07:55.366771   62745 logs.go:282] 0 containers: []
	W1026 02:07:55.366779   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:55.366785   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:55.366847   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:55.397880   62745 cri.go:89] found id: ""
	I1026 02:07:55.397906   62745 logs.go:282] 0 containers: []
	W1026 02:07:55.397912   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:55.397920   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:55.397937   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:55.465665   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:55.465688   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:55.465704   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:55.543012   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:55.543052   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:07:55.578358   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:55.578388   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:55.631250   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:55.631282   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:58.144367   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:07:58.156714   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:07:58.156792   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:07:58.189562   62745 cri.go:89] found id: ""
	I1026 02:07:58.189587   62745 logs.go:282] 0 containers: []
	W1026 02:07:58.189595   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:07:58.189626   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:07:58.189687   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:07:58.222695   62745 cri.go:89] found id: ""
	I1026 02:07:58.222721   62745 logs.go:282] 0 containers: []
	W1026 02:07:58.222729   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:07:58.222735   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:07:58.222795   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:07:58.260873   62745 cri.go:89] found id: ""
	I1026 02:07:58.260904   62745 logs.go:282] 0 containers: []
	W1026 02:07:58.260916   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:07:58.260924   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:07:58.260991   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:07:58.294508   62745 cri.go:89] found id: ""
	I1026 02:07:58.294535   62745 logs.go:282] 0 containers: []
	W1026 02:07:58.294546   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:07:58.294553   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:07:58.294616   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:07:58.327554   62745 cri.go:89] found id: ""
	I1026 02:07:58.327575   62745 logs.go:282] 0 containers: []
	W1026 02:07:58.327582   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:07:58.327588   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:07:58.327649   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:07:58.364191   62745 cri.go:89] found id: ""
	I1026 02:07:58.364221   62745 logs.go:282] 0 containers: []
	W1026 02:07:58.364229   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:07:58.364235   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:07:58.364294   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:07:58.395374   62745 cri.go:89] found id: ""
	I1026 02:07:58.395399   62745 logs.go:282] 0 containers: []
	W1026 02:07:58.395407   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:07:58.395413   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:07:58.395470   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:07:58.428051   62745 cri.go:89] found id: ""
	I1026 02:07:58.428094   62745 logs.go:282] 0 containers: []
	W1026 02:07:58.428105   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:07:58.428115   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:07:58.428130   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:07:58.478234   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:07:58.478270   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:07:58.490968   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:07:58.490991   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:07:58.570380   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:07:58.570402   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:07:58.570414   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:07:58.648280   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:07:58.648313   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:01.184828   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:01.197285   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:01.197344   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:01.232327   62745 cri.go:89] found id: ""
	I1026 02:08:01.232352   62745 logs.go:282] 0 containers: []
	W1026 02:08:01.232360   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:01.232366   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:01.232413   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:01.264467   62745 cri.go:89] found id: ""
	I1026 02:08:01.264495   62745 logs.go:282] 0 containers: []
	W1026 02:08:01.264507   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:01.264514   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:01.264564   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:01.306169   62745 cri.go:89] found id: ""
	I1026 02:08:01.306195   62745 logs.go:282] 0 containers: []
	W1026 02:08:01.306205   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:01.306213   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:01.306279   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:01.339428   62745 cri.go:89] found id: ""
	I1026 02:08:01.339456   62745 logs.go:282] 0 containers: []
	W1026 02:08:01.339468   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:01.339476   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:01.339537   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:01.371483   62745 cri.go:89] found id: ""
	I1026 02:08:01.371514   62745 logs.go:282] 0 containers: []
	W1026 02:08:01.371525   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:01.371533   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:01.371594   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:01.403778   62745 cri.go:89] found id: ""
	I1026 02:08:01.403801   62745 logs.go:282] 0 containers: []
	W1026 02:08:01.403809   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:01.403815   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:01.403866   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:01.436030   62745 cri.go:89] found id: ""
	I1026 02:08:01.436054   62745 logs.go:282] 0 containers: []
	W1026 02:08:01.436064   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:01.436071   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:01.436133   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:01.469437   62745 cri.go:89] found id: ""
	I1026 02:08:01.469471   62745 logs.go:282] 0 containers: []
	W1026 02:08:01.469481   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:01.469492   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:01.469506   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:01.518183   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:01.518218   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:01.531223   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:01.531255   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:01.596036   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:01.596063   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:01.596080   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:01.672819   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:01.672856   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:04.239826   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:04.254481   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:04.254545   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:04.295642   62745 cri.go:89] found id: ""
	I1026 02:08:04.295674   62745 logs.go:282] 0 containers: []
	W1026 02:08:04.295683   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:04.295689   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:04.295738   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:04.328260   62745 cri.go:89] found id: ""
	I1026 02:08:04.328281   62745 logs.go:282] 0 containers: []
	W1026 02:08:04.328289   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:04.328295   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:04.328342   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:04.364236   62745 cri.go:89] found id: ""
	I1026 02:08:04.364262   62745 logs.go:282] 0 containers: []
	W1026 02:08:04.364271   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:04.364278   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:04.364340   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:04.397430   62745 cri.go:89] found id: ""
	I1026 02:08:04.397457   62745 logs.go:282] 0 containers: []
	W1026 02:08:04.397466   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:04.397474   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:04.397533   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:04.433899   62745 cri.go:89] found id: ""
	I1026 02:08:04.433927   62745 logs.go:282] 0 containers: []
	W1026 02:08:04.433938   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:04.433945   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:04.434010   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:04.472230   62745 cri.go:89] found id: ""
	I1026 02:08:04.472263   62745 logs.go:282] 0 containers: []
	W1026 02:08:04.472274   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:04.472281   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:04.472341   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:04.509655   62745 cri.go:89] found id: ""
	I1026 02:08:04.509679   62745 logs.go:282] 0 containers: []
	W1026 02:08:04.509689   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:04.509695   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:04.509757   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:04.546581   62745 cri.go:89] found id: ""
	I1026 02:08:04.546610   62745 logs.go:282] 0 containers: []
	W1026 02:08:04.546622   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:04.546630   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:04.546641   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:04.620875   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:04.620898   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:04.620912   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:04.695375   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:04.695410   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:04.731475   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:04.731505   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:04.785649   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:04.785677   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:07.300233   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:07.312696   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:07.312767   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:07.349242   62745 cri.go:89] found id: ""
	I1026 02:08:07.349274   62745 logs.go:282] 0 containers: []
	W1026 02:08:07.349285   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:07.349292   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:07.349357   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:07.382578   62745 cri.go:89] found id: ""
	I1026 02:08:07.382606   62745 logs.go:282] 0 containers: []
	W1026 02:08:07.382616   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:07.382623   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:07.382683   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:07.423434   62745 cri.go:89] found id: ""
	I1026 02:08:07.423465   62745 logs.go:282] 0 containers: []
	W1026 02:08:07.423477   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:07.423484   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:07.423542   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:07.464035   62745 cri.go:89] found id: ""
	I1026 02:08:07.464058   62745 logs.go:282] 0 containers: []
	W1026 02:08:07.464065   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:07.464070   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:07.464122   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:07.508768   62745 cri.go:89] found id: ""
	I1026 02:08:07.508794   62745 logs.go:282] 0 containers: []
	W1026 02:08:07.508802   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:07.508808   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:07.508854   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:07.542755   62745 cri.go:89] found id: ""
	I1026 02:08:07.542784   62745 logs.go:282] 0 containers: []
	W1026 02:08:07.542792   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:07.542798   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:07.542843   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:07.573819   62745 cri.go:89] found id: ""
	I1026 02:08:07.573850   62745 logs.go:282] 0 containers: []
	W1026 02:08:07.573860   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:07.573868   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:07.573926   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:07.610126   62745 cri.go:89] found id: ""
	I1026 02:08:07.610150   62745 logs.go:282] 0 containers: []
	W1026 02:08:07.610163   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:07.610170   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:07.610182   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:07.650919   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:07.650950   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:07.703138   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:07.703174   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:07.716055   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:07.716078   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:07.783214   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:07.783236   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:07.783250   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:10.357930   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:10.372839   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:10.372911   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:10.408799   62745 cri.go:89] found id: ""
	I1026 02:08:10.408823   62745 logs.go:282] 0 containers: []
	W1026 02:08:10.408832   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:10.408838   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:10.408896   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:10.444727   62745 cri.go:89] found id: ""
	I1026 02:08:10.444759   62745 logs.go:282] 0 containers: []
	W1026 02:08:10.444774   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:10.444781   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:10.444840   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:10.477628   62745 cri.go:89] found id: ""
	I1026 02:08:10.477659   62745 logs.go:282] 0 containers: []
	W1026 02:08:10.477668   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:10.477674   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:10.477732   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:10.518985   62745 cri.go:89] found id: ""
	I1026 02:08:10.519010   62745 logs.go:282] 0 containers: []
	W1026 02:08:10.519021   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:10.519028   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:10.519091   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:10.551984   62745 cri.go:89] found id: ""
	I1026 02:08:10.552011   62745 logs.go:282] 0 containers: []
	W1026 02:08:10.552019   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:10.552026   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:10.552086   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:10.583502   62745 cri.go:89] found id: ""
	I1026 02:08:10.583530   62745 logs.go:282] 0 containers: []
	W1026 02:08:10.583540   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:10.583548   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:10.583615   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:10.615570   62745 cri.go:89] found id: ""
	I1026 02:08:10.615600   62745 logs.go:282] 0 containers: []
	W1026 02:08:10.615611   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:10.615619   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:10.615680   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:10.660675   62745 cri.go:89] found id: ""
	I1026 02:08:10.660714   62745 logs.go:282] 0 containers: []
	W1026 02:08:10.660725   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:10.660737   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:10.660750   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:10.711969   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:10.712001   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:10.725496   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:10.725523   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:10.790976   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:10.791002   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:10.791016   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:10.871832   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:10.871865   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:13.409930   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:13.422624   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:13.422705   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:13.455147   62745 cri.go:89] found id: ""
	I1026 02:08:13.455174   62745 logs.go:282] 0 containers: []
	W1026 02:08:13.455185   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:13.455192   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:13.455261   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:13.486676   62745 cri.go:89] found id: ""
	I1026 02:08:13.486700   62745 logs.go:282] 0 containers: []
	W1026 02:08:13.486709   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:13.486715   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:13.486769   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:13.518163   62745 cri.go:89] found id: ""
	I1026 02:08:13.518190   62745 logs.go:282] 0 containers: []
	W1026 02:08:13.518198   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:13.518204   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:13.518259   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:13.550442   62745 cri.go:89] found id: ""
	I1026 02:08:13.550472   62745 logs.go:282] 0 containers: []
	W1026 02:08:13.550480   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:13.550486   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:13.550546   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:13.581575   62745 cri.go:89] found id: ""
	I1026 02:08:13.581604   62745 logs.go:282] 0 containers: []
	W1026 02:08:13.581626   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:13.581632   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:13.581689   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:13.617049   62745 cri.go:89] found id: ""
	I1026 02:08:13.617085   62745 logs.go:282] 0 containers: []
	W1026 02:08:13.617097   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:13.617105   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:13.617157   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:13.650969   62745 cri.go:89] found id: ""
	I1026 02:08:13.650994   62745 logs.go:282] 0 containers: []
	W1026 02:08:13.651004   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:13.651012   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:13.651073   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:13.688760   62745 cri.go:89] found id: ""
	I1026 02:08:13.688785   62745 logs.go:282] 0 containers: []
	W1026 02:08:13.688792   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:13.688800   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:13.688810   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:13.737744   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:13.737783   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:13.750768   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:13.750792   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:13.825287   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:13.825312   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:13.825325   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:13.903847   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:13.903889   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:16.440337   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:16.454191   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:16.454252   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:16.495504   62745 cri.go:89] found id: ""
	I1026 02:08:16.495537   62745 logs.go:282] 0 containers: []
	W1026 02:08:16.495549   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:16.495556   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:16.495616   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:16.529098   62745 cri.go:89] found id: ""
	I1026 02:08:16.529125   62745 logs.go:282] 0 containers: []
	W1026 02:08:16.529134   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:16.529140   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:16.529188   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:16.565347   62745 cri.go:89] found id: ""
	I1026 02:08:16.565376   62745 logs.go:282] 0 containers: []
	W1026 02:08:16.565384   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:16.565390   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:16.565462   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:16.602635   62745 cri.go:89] found id: ""
	I1026 02:08:16.602659   62745 logs.go:282] 0 containers: []
	W1026 02:08:16.602667   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:16.602674   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:16.602725   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:16.634610   62745 cri.go:89] found id: ""
	I1026 02:08:16.634636   62745 logs.go:282] 0 containers: []
	W1026 02:08:16.634646   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:16.634655   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:16.634723   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:16.665466   62745 cri.go:89] found id: ""
	I1026 02:08:16.665495   62745 logs.go:282] 0 containers: []
	W1026 02:08:16.665508   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:16.665516   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:16.665574   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:16.705100   62745 cri.go:89] found id: ""
	I1026 02:08:16.705130   62745 logs.go:282] 0 containers: []
	W1026 02:08:16.705142   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:16.705150   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:16.705209   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:16.738037   62745 cri.go:89] found id: ""
	I1026 02:08:16.738067   62745 logs.go:282] 0 containers: []
	W1026 02:08:16.738075   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:16.738083   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:16.738094   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:16.773953   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:16.773978   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:16.825028   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:16.825063   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:16.837494   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:16.837524   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:16.912281   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:16.912298   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:16.912311   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:19.493012   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:19.505677   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:19.505752   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:19.537587   62745 cri.go:89] found id: ""
	I1026 02:08:19.537609   62745 logs.go:282] 0 containers: []
	W1026 02:08:19.537618   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:19.537630   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:19.537702   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:19.569151   62745 cri.go:89] found id: ""
	I1026 02:08:19.569180   62745 logs.go:282] 0 containers: []
	W1026 02:08:19.569191   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:19.569199   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:19.569259   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:19.602798   62745 cri.go:89] found id: ""
	I1026 02:08:19.602829   62745 logs.go:282] 0 containers: []
	W1026 02:08:19.602840   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:19.602848   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:19.602906   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:19.635291   62745 cri.go:89] found id: ""
	I1026 02:08:19.635313   62745 logs.go:282] 0 containers: []
	W1026 02:08:19.635320   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:19.635326   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:19.635381   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:19.670775   62745 cri.go:89] found id: ""
	I1026 02:08:19.670801   62745 logs.go:282] 0 containers: []
	W1026 02:08:19.670808   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:19.670815   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:19.670863   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:19.707295   62745 cri.go:89] found id: ""
	I1026 02:08:19.707322   62745 logs.go:282] 0 containers: []
	W1026 02:08:19.707333   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:19.707341   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:19.707408   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:19.741160   62745 cri.go:89] found id: ""
	I1026 02:08:19.741181   62745 logs.go:282] 0 containers: []
	W1026 02:08:19.741189   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:19.741195   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:19.741255   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:19.772764   62745 cri.go:89] found id: ""
	I1026 02:08:19.772797   62745 logs.go:282] 0 containers: []
	W1026 02:08:19.772807   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:19.772816   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:19.772827   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:19.820416   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:19.820455   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:19.833864   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:19.833892   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:19.901887   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:19.901912   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:19.901926   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:19.975742   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:19.975777   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:22.513110   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:22.525810   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:22.525885   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:22.558634   62745 cri.go:89] found id: ""
	I1026 02:08:22.558665   62745 logs.go:282] 0 containers: []
	W1026 02:08:22.558676   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:22.558683   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:22.558740   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:22.590074   62745 cri.go:89] found id: ""
	I1026 02:08:22.590100   62745 logs.go:282] 0 containers: []
	W1026 02:08:22.590109   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:22.590115   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:22.590171   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:22.622736   62745 cri.go:89] found id: ""
	I1026 02:08:22.622759   62745 logs.go:282] 0 containers: []
	W1026 02:08:22.622766   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:22.622773   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:22.622826   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:22.660241   62745 cri.go:89] found id: ""
	I1026 02:08:22.660278   62745 logs.go:282] 0 containers: []
	W1026 02:08:22.660289   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:22.660297   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:22.660358   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:22.694328   62745 cri.go:89] found id: ""
	I1026 02:08:22.694352   62745 logs.go:282] 0 containers: []
	W1026 02:08:22.694362   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:22.694369   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:22.694435   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:22.725943   62745 cri.go:89] found id: ""
	I1026 02:08:22.725973   62745 logs.go:282] 0 containers: []
	W1026 02:08:22.725982   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:22.725990   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:22.726050   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:22.761196   62745 cri.go:89] found id: ""
	I1026 02:08:22.761221   62745 logs.go:282] 0 containers: []
	W1026 02:08:22.761230   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:22.761237   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:22.761300   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:22.794536   62745 cri.go:89] found id: ""
	I1026 02:08:22.794557   62745 logs.go:282] 0 containers: []
	W1026 02:08:22.794564   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:22.794571   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:22.794583   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:22.806661   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:22.806685   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:22.871740   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:22.871760   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:22.871774   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:22.946659   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:22.946694   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:22.986919   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:22.986944   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:25.532589   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:25.544793   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:25.544862   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:25.578566   62745 cri.go:89] found id: ""
	I1026 02:08:25.578596   62745 logs.go:282] 0 containers: []
	W1026 02:08:25.578605   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:25.578611   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:25.578668   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:25.611999   62745 cri.go:89] found id: ""
	I1026 02:08:25.612023   62745 logs.go:282] 0 containers: []
	W1026 02:08:25.612031   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:25.612037   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:25.612095   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:25.644308   62745 cri.go:89] found id: ""
	I1026 02:08:25.644330   62745 logs.go:282] 0 containers: []
	W1026 02:08:25.644338   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:25.644344   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:25.644408   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:25.676010   62745 cri.go:89] found id: ""
	I1026 02:08:25.676036   62745 logs.go:282] 0 containers: []
	W1026 02:08:25.676044   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:25.676051   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:25.676109   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:25.711676   62745 cri.go:89] found id: ""
	I1026 02:08:25.711704   62745 logs.go:282] 0 containers: []
	W1026 02:08:25.711712   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:25.711719   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:25.711771   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:25.747402   62745 cri.go:89] found id: ""
	I1026 02:08:25.747429   62745 logs.go:282] 0 containers: []
	W1026 02:08:25.747440   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:25.747448   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:25.747497   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:25.783460   62745 cri.go:89] found id: ""
	I1026 02:08:25.783483   62745 logs.go:282] 0 containers: []
	W1026 02:08:25.783492   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:25.783499   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:25.783556   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:25.815189   62745 cri.go:89] found id: ""
	I1026 02:08:25.815218   62745 logs.go:282] 0 containers: []
	W1026 02:08:25.815232   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:25.815242   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:25.815256   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:25.890691   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:25.890731   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:25.930586   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:25.930621   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:25.980506   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:25.980540   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:25.993501   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:25.993532   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:26.054846   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:28.556014   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:28.568620   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:28.568680   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:28.603011   62745 cri.go:89] found id: ""
	I1026 02:08:28.603041   62745 logs.go:282] 0 containers: []
	W1026 02:08:28.603052   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:28.603062   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:28.603125   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:28.638080   62745 cri.go:89] found id: ""
	I1026 02:08:28.638114   62745 logs.go:282] 0 containers: []
	W1026 02:08:28.638124   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:28.638133   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:28.638195   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:28.673207   62745 cri.go:89] found id: ""
	I1026 02:08:28.673234   62745 logs.go:282] 0 containers: []
	W1026 02:08:28.673245   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:28.673251   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:28.673306   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:28.709564   62745 cri.go:89] found id: ""
	I1026 02:08:28.709587   62745 logs.go:282] 0 containers: []
	W1026 02:08:28.709596   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:28.709602   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:28.709660   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:28.745873   62745 cri.go:89] found id: ""
	I1026 02:08:28.745899   62745 logs.go:282] 0 containers: []
	W1026 02:08:28.745907   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:28.745913   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:28.745978   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:28.779839   62745 cri.go:89] found id: ""
	I1026 02:08:28.779865   62745 logs.go:282] 0 containers: []
	W1026 02:08:28.779876   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:28.779892   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:28.779948   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:28.813925   62745 cri.go:89] found id: ""
	I1026 02:08:28.813949   62745 logs.go:282] 0 containers: []
	W1026 02:08:28.813957   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:28.813964   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:28.814010   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:28.847919   62745 cri.go:89] found id: ""
	I1026 02:08:28.847944   62745 logs.go:282] 0 containers: []
	W1026 02:08:28.847951   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:28.847961   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:28.847973   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:28.916176   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:28.916197   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:28.916209   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:28.996542   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:28.996577   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:29.037045   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:29.037070   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:29.087027   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:29.087059   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:31.603457   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:31.615817   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:31.615876   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:31.651806   62745 cri.go:89] found id: ""
	I1026 02:08:31.651830   62745 logs.go:282] 0 containers: []
	W1026 02:08:31.651840   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:31.651848   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:31.651908   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:31.684606   62745 cri.go:89] found id: ""
	I1026 02:08:31.684635   62745 logs.go:282] 0 containers: []
	W1026 02:08:31.684645   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:31.684653   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:31.684712   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:31.717923   62745 cri.go:89] found id: ""
	I1026 02:08:31.717954   62745 logs.go:282] 0 containers: []
	W1026 02:08:31.717966   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:31.717976   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:31.718041   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:31.751740   62745 cri.go:89] found id: ""
	I1026 02:08:31.751770   62745 logs.go:282] 0 containers: []
	W1026 02:08:31.751781   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:31.751789   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:31.751848   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:31.784175   62745 cri.go:89] found id: ""
	I1026 02:08:31.784244   62745 logs.go:282] 0 containers: []
	W1026 02:08:31.784261   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:31.784271   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:31.784330   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:31.817523   62745 cri.go:89] found id: ""
	I1026 02:08:31.817552   62745 logs.go:282] 0 containers: []
	W1026 02:08:31.817563   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:31.817572   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:31.817634   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:31.849001   62745 cri.go:89] found id: ""
	I1026 02:08:31.849034   62745 logs.go:282] 0 containers: []
	W1026 02:08:31.849047   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:31.849055   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:31.849105   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:31.879403   62745 cri.go:89] found id: ""
	I1026 02:08:31.879431   62745 logs.go:282] 0 containers: []
	W1026 02:08:31.879456   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:31.879464   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:31.879487   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:31.942447   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:31.942474   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:31.942488   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:32.021986   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:32.022022   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:32.056609   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:32.056636   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:32.105273   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:32.105304   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:34.618372   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:34.630895   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:34.630972   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:34.665359   62745 cri.go:89] found id: ""
	I1026 02:08:34.665390   62745 logs.go:282] 0 containers: []
	W1026 02:08:34.665402   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:34.665410   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:34.665486   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:34.696082   62745 cri.go:89] found id: ""
	I1026 02:08:34.696109   62745 logs.go:282] 0 containers: []
	W1026 02:08:34.696118   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:34.696126   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:34.696190   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:34.728736   62745 cri.go:89] found id: ""
	I1026 02:08:34.728763   62745 logs.go:282] 0 containers: []
	W1026 02:08:34.728772   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:34.728778   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:34.728834   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:34.760581   62745 cri.go:89] found id: ""
	I1026 02:08:34.760614   62745 logs.go:282] 0 containers: []
	W1026 02:08:34.760625   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:34.760633   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:34.760690   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:34.792050   62745 cri.go:89] found id: ""
	I1026 02:08:34.792071   62745 logs.go:282] 0 containers: []
	W1026 02:08:34.792079   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:34.792085   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:34.792141   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:34.823661   62745 cri.go:89] found id: ""
	I1026 02:08:34.823689   62745 logs.go:282] 0 containers: []
	W1026 02:08:34.823704   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:34.823710   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:34.823758   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:34.858707   62745 cri.go:89] found id: ""
	I1026 02:08:34.858732   62745 logs.go:282] 0 containers: []
	W1026 02:08:34.858743   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:34.858751   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:34.858809   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:34.889620   62745 cri.go:89] found id: ""
	I1026 02:08:34.889648   62745 logs.go:282] 0 containers: []
	W1026 02:08:34.889660   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:34.889670   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:34.889683   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:34.938323   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:34.938355   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:34.950839   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:34.950864   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:35.022103   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:35.022131   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:35.022146   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:35.105889   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:35.105933   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:37.647963   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:37.660729   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:37.660801   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:37.694126   62745 cri.go:89] found id: ""
	I1026 02:08:37.694154   62745 logs.go:282] 0 containers: []
	W1026 02:08:37.694165   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:37.694173   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:37.694226   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:37.725639   62745 cri.go:89] found id: ""
	I1026 02:08:37.725671   62745 logs.go:282] 0 containers: []
	W1026 02:08:37.725681   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:37.725693   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:37.725742   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:37.757094   62745 cri.go:89] found id: ""
	I1026 02:08:37.757121   62745 logs.go:282] 0 containers: []
	W1026 02:08:37.757132   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:37.757140   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:37.757199   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:37.790413   62745 cri.go:89] found id: ""
	I1026 02:08:37.790440   62745 logs.go:282] 0 containers: []
	W1026 02:08:37.790447   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:37.790453   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:37.790500   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:37.824258   62745 cri.go:89] found id: ""
	I1026 02:08:37.824284   62745 logs.go:282] 0 containers: []
	W1026 02:08:37.824292   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:37.824298   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:37.824345   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:37.854922   62745 cri.go:89] found id: ""
	I1026 02:08:37.854957   62745 logs.go:282] 0 containers: []
	W1026 02:08:37.854969   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:37.854978   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:37.855043   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:37.891129   62745 cri.go:89] found id: ""
	I1026 02:08:37.891157   62745 logs.go:282] 0 containers: []
	W1026 02:08:37.891168   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:37.891175   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:37.891236   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:37.925548   62745 cri.go:89] found id: ""
	I1026 02:08:37.925582   62745 logs.go:282] 0 containers: []
	W1026 02:08:37.925594   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:37.925605   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:37.925618   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:38.003275   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:38.003308   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:38.044114   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:38.044147   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:38.098885   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:38.098916   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:38.111804   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:38.111829   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:38.175922   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:40.676707   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:40.689205   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:40.689269   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:40.721318   62745 cri.go:89] found id: ""
	I1026 02:08:40.721346   62745 logs.go:282] 0 containers: []
	W1026 02:08:40.721354   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:40.721360   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:40.721438   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:40.753839   62745 cri.go:89] found id: ""
	I1026 02:08:40.753872   62745 logs.go:282] 0 containers: []
	W1026 02:08:40.753883   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:40.753891   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:40.753953   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:40.787788   62745 cri.go:89] found id: ""
	I1026 02:08:40.787815   62745 logs.go:282] 0 containers: []
	W1026 02:08:40.787827   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:40.787835   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:40.787892   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:40.822322   62745 cri.go:89] found id: ""
	I1026 02:08:40.822353   62745 logs.go:282] 0 containers: []
	W1026 02:08:40.822365   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:40.822373   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:40.822437   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:40.855255   62745 cri.go:89] found id: ""
	I1026 02:08:40.855281   62745 logs.go:282] 0 containers: []
	W1026 02:08:40.855291   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:40.855299   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:40.855358   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:40.888181   62745 cri.go:89] found id: ""
	I1026 02:08:40.888206   62745 logs.go:282] 0 containers: []
	W1026 02:08:40.888215   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:40.888220   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:40.888271   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:40.924334   62745 cri.go:89] found id: ""
	I1026 02:08:40.924361   62745 logs.go:282] 0 containers: []
	W1026 02:08:40.924370   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:40.924376   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:40.924426   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:40.961191   62745 cri.go:89] found id: ""
	I1026 02:08:40.961216   62745 logs.go:282] 0 containers: []
	W1026 02:08:40.961224   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:40.961231   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:40.961261   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:40.973567   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:40.973590   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:41.039495   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:41.039515   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:41.039527   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:41.116293   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:41.116330   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:41.153112   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:41.153138   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
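The repeated blocks above and below are minikube's control-plane wait loop: every few seconds it checks for a kube-apiserver process, then asks CRI-O for containers of each expected component, and gathers kubelet/dmesg/CRI-O logs when nothing is found. A minimal sketch of that polling, using only the commands visible in this log; the retry interval and overall timeout here are assumptions, not minikube's actual values:

	# Hypothetical reconstruction of the wait loop recorded above.
	# Commands are the ones minikube runs over SSH; interval/timeout are assumed.
	deadline=$((SECONDS + 240))
	while [ "$SECONDS" -lt "$deadline" ]; do
	  if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	    echo "kube-apiserver process is up"
	    break
	  fi
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    [ -z "$ids" ] && echo "No container was found matching \"$name\""
	  done
	  sudo journalctl -u kubelet -n 400 >/dev/null   # gather kubelet logs, as in the log above
	  sleep 3
	done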
	I1026 02:08:43.702627   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:43.715096   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:43.715160   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:43.751422   62745 cri.go:89] found id: ""
	I1026 02:08:43.751452   62745 logs.go:282] 0 containers: []
	W1026 02:08:43.751460   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:43.751468   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:43.751531   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:43.785497   62745 cri.go:89] found id: ""
	I1026 02:08:43.785522   62745 logs.go:282] 0 containers: []
	W1026 02:08:43.785529   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:43.785534   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:43.785578   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:43.817202   62745 cri.go:89] found id: ""
	I1026 02:08:43.817226   62745 logs.go:282] 0 containers: []
	W1026 02:08:43.817233   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:43.817240   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:43.817299   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:43.849679   62745 cri.go:89] found id: ""
	I1026 02:08:43.849700   62745 logs.go:282] 0 containers: []
	W1026 02:08:43.849707   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:43.849713   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:43.849771   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:43.881980   62745 cri.go:89] found id: ""
	I1026 02:08:43.882006   62745 logs.go:282] 0 containers: []
	W1026 02:08:43.882017   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:43.882024   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:43.882085   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:43.912117   62745 cri.go:89] found id: ""
	I1026 02:08:43.912143   62745 logs.go:282] 0 containers: []
	W1026 02:08:43.912155   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:43.912162   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:43.912224   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:43.946380   62745 cri.go:89] found id: ""
	I1026 02:08:43.946407   62745 logs.go:282] 0 containers: []
	W1026 02:08:43.946414   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:43.946420   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:43.946470   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:43.982498   62745 cri.go:89] found id: ""
	I1026 02:08:43.982533   62745 logs.go:282] 0 containers: []
	W1026 02:08:43.982544   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:43.982555   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:43.982568   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:44.059851   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:44.059889   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:44.097961   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:44.097994   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:44.150021   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:44.150064   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:44.163400   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:44.163421   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:44.229895   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:46.730182   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:46.743267   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:46.743346   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:46.777313   62745 cri.go:89] found id: ""
	I1026 02:08:46.777346   62745 logs.go:282] 0 containers: []
	W1026 02:08:46.777358   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:46.777365   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:46.777444   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:46.810378   62745 cri.go:89] found id: ""
	I1026 02:08:46.810416   62745 logs.go:282] 0 containers: []
	W1026 02:08:46.810428   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:46.810436   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:46.810502   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:46.842669   62745 cri.go:89] found id: ""
	I1026 02:08:46.842700   62745 logs.go:282] 0 containers: []
	W1026 02:08:46.842710   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:46.842718   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:46.842779   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:46.875247   62745 cri.go:89] found id: ""
	I1026 02:08:46.875274   62745 logs.go:282] 0 containers: []
	W1026 02:08:46.875285   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:46.875292   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:46.875355   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:46.905475   62745 cri.go:89] found id: ""
	I1026 02:08:46.905501   62745 logs.go:282] 0 containers: []
	W1026 02:08:46.905509   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:46.905514   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:46.905563   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:46.936029   62745 cri.go:89] found id: ""
	I1026 02:08:46.936050   62745 logs.go:282] 0 containers: []
	W1026 02:08:46.936057   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:46.936064   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:46.936108   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:46.968276   62745 cri.go:89] found id: ""
	I1026 02:08:46.968308   62745 logs.go:282] 0 containers: []
	W1026 02:08:46.968319   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:46.968326   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:46.968388   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:47.014097   62745 cri.go:89] found id: ""
	I1026 02:08:47.014124   62745 logs.go:282] 0 containers: []
	W1026 02:08:47.014132   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:47.014140   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:47.014152   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:47.052220   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:47.052244   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:47.107413   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:47.107458   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:47.119973   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:47.120001   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:47.190031   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:47.190049   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:47.190060   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:49.764726   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:49.777467   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:49.777541   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:49.808972   62745 cri.go:89] found id: ""
	I1026 02:08:49.809002   62745 logs.go:282] 0 containers: []
	W1026 02:08:49.809013   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:49.809021   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:49.809084   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:49.841093   62745 cri.go:89] found id: ""
	I1026 02:08:49.841122   62745 logs.go:282] 0 containers: []
	W1026 02:08:49.841130   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:49.841136   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:49.841193   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:49.875478   62745 cri.go:89] found id: ""
	I1026 02:08:49.875509   62745 logs.go:282] 0 containers: []
	W1026 02:08:49.875521   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:49.875529   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:49.875595   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:49.908860   62745 cri.go:89] found id: ""
	I1026 02:08:49.908891   62745 logs.go:282] 0 containers: []
	W1026 02:08:49.908901   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:49.908907   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:49.908972   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:49.941113   62745 cri.go:89] found id: ""
	I1026 02:08:49.941137   62745 logs.go:282] 0 containers: []
	W1026 02:08:49.941144   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:49.941150   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:49.941198   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:49.973200   62745 cri.go:89] found id: ""
	I1026 02:08:49.973228   62745 logs.go:282] 0 containers: []
	W1026 02:08:49.973239   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:49.973247   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:49.973307   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:50.006174   62745 cri.go:89] found id: ""
	I1026 02:08:50.006203   62745 logs.go:282] 0 containers: []
	W1026 02:08:50.006213   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:50.006221   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:50.006291   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:50.039623   62745 cri.go:89] found id: ""
	I1026 02:08:50.039652   62745 logs.go:282] 0 containers: []
	W1026 02:08:50.039675   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:50.039686   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:50.039701   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:50.091561   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:50.091600   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:50.105026   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:50.105054   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:50.174188   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:50.174211   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:50.174226   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:50.256489   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:50.256525   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:52.795154   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:52.807276   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:52.807342   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:52.842173   62745 cri.go:89] found id: ""
	I1026 02:08:52.842199   62745 logs.go:282] 0 containers: []
	W1026 02:08:52.842210   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:52.842218   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:52.842270   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:52.875913   62745 cri.go:89] found id: ""
	I1026 02:08:52.875942   62745 logs.go:282] 0 containers: []
	W1026 02:08:52.875953   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:52.875960   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:52.876020   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:52.906944   62745 cri.go:89] found id: ""
	I1026 02:08:52.906972   62745 logs.go:282] 0 containers: []
	W1026 02:08:52.906980   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:52.906988   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:52.907046   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:52.939621   62745 cri.go:89] found id: ""
	I1026 02:08:52.939653   62745 logs.go:282] 0 containers: []
	W1026 02:08:52.939664   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:52.939671   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:52.939786   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:52.970960   62745 cri.go:89] found id: ""
	I1026 02:08:52.970992   62745 logs.go:282] 0 containers: []
	W1026 02:08:52.971003   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:52.971011   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:52.971079   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:53.003974   62745 cri.go:89] found id: ""
	I1026 02:08:53.004005   62745 logs.go:282] 0 containers: []
	W1026 02:08:53.004016   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:53.004024   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:53.004083   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:53.036906   62745 cri.go:89] found id: ""
	I1026 02:08:53.036930   62745 logs.go:282] 0 containers: []
	W1026 02:08:53.036938   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:53.036944   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:53.036998   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:53.066878   62745 cri.go:89] found id: ""
	I1026 02:08:53.066904   62745 logs.go:282] 0 containers: []
	W1026 02:08:53.066924   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:53.066934   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:53.066948   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:53.079228   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:53.079250   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:53.143347   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:53.143378   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:53.143391   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:53.218363   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:53.218399   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:53.254757   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:53.254793   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:55.806558   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:55.819075   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:55.819143   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:55.851175   62745 cri.go:89] found id: ""
	I1026 02:08:55.851197   62745 logs.go:282] 0 containers: []
	W1026 02:08:55.851205   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:55.851211   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:55.851270   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:55.882873   62745 cri.go:89] found id: ""
	I1026 02:08:55.882900   62745 logs.go:282] 0 containers: []
	W1026 02:08:55.882909   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:55.882918   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:55.882979   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:55.915889   62745 cri.go:89] found id: ""
	I1026 02:08:55.915911   62745 logs.go:282] 0 containers: []
	W1026 02:08:55.915922   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:55.915927   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:55.915983   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:55.948031   62745 cri.go:89] found id: ""
	I1026 02:08:55.948060   62745 logs.go:282] 0 containers: []
	W1026 02:08:55.948072   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:55.948079   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:55.948136   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:55.979736   62745 cri.go:89] found id: ""
	I1026 02:08:55.979762   62745 logs.go:282] 0 containers: []
	W1026 02:08:55.979771   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:55.979781   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:55.979829   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:56.011942   62745 cri.go:89] found id: ""
	I1026 02:08:56.011975   62745 logs.go:282] 0 containers: []
	W1026 02:08:56.011983   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:56.011990   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:56.012042   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:56.047602   62745 cri.go:89] found id: ""
	I1026 02:08:56.047630   62745 logs.go:282] 0 containers: []
	W1026 02:08:56.047638   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:56.047645   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:56.047732   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:56.078132   62745 cri.go:89] found id: ""
	I1026 02:08:56.078162   62745 logs.go:282] 0 containers: []
	W1026 02:08:56.078172   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:56.078183   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:56.078202   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:56.090232   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:56.090259   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:56.152734   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:56.152757   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:56.152770   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:56.234437   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:56.234471   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:08:56.273058   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:56.273088   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:58.827935   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:08:58.840067   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:08:58.840133   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:08:58.872130   62745 cri.go:89] found id: ""
	I1026 02:08:58.872155   62745 logs.go:282] 0 containers: []
	W1026 02:08:58.872163   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:08:58.872169   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:08:58.872219   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:08:58.904718   62745 cri.go:89] found id: ""
	I1026 02:08:58.904744   62745 logs.go:282] 0 containers: []
	W1026 02:08:58.904752   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:08:58.904757   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:08:58.904804   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:08:58.936774   62745 cri.go:89] found id: ""
	I1026 02:08:58.936797   62745 logs.go:282] 0 containers: []
	W1026 02:08:58.936806   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:08:58.936814   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:08:58.936872   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:08:58.972820   62745 cri.go:89] found id: ""
	I1026 02:08:58.972841   62745 logs.go:282] 0 containers: []
	W1026 02:08:58.972848   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:08:58.972855   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:08:58.972912   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:08:59.006748   62745 cri.go:89] found id: ""
	I1026 02:08:59.006780   62745 logs.go:282] 0 containers: []
	W1026 02:08:59.006791   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:08:59.006799   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:08:59.006851   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:08:59.037699   62745 cri.go:89] found id: ""
	I1026 02:08:59.037726   62745 logs.go:282] 0 containers: []
	W1026 02:08:59.037735   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:08:59.037742   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:08:59.037807   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:08:59.068083   62745 cri.go:89] found id: ""
	I1026 02:08:59.068105   62745 logs.go:282] 0 containers: []
	W1026 02:08:59.068112   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:08:59.068118   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:08:59.068164   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:08:59.098128   62745 cri.go:89] found id: ""
	I1026 02:08:59.098158   62745 logs.go:282] 0 containers: []
	W1026 02:08:59.098168   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:08:59.098179   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:08:59.098195   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:08:59.149525   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:08:59.149556   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:08:59.170062   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:08:59.170092   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:08:59.274024   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:08:59.274047   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:08:59.274063   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:08:59.347546   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:08:59.347579   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:09:01.882822   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:09:01.896765   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:09:01.896832   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:09:01.934973   62745 cri.go:89] found id: ""
	I1026 02:09:01.935002   62745 logs.go:282] 0 containers: []
	W1026 02:09:01.935010   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:09:01.935016   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:09:01.935069   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:09:01.972272   62745 cri.go:89] found id: ""
	I1026 02:09:01.972299   62745 logs.go:282] 0 containers: []
	W1026 02:09:01.972307   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:09:01.972312   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:09:01.972364   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:09:02.007986   62745 cri.go:89] found id: ""
	I1026 02:09:02.008015   62745 logs.go:282] 0 containers: []
	W1026 02:09:02.008026   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:09:02.008035   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:09:02.008100   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:09:02.041798   62745 cri.go:89] found id: ""
	I1026 02:09:02.041827   62745 logs.go:282] 0 containers: []
	W1026 02:09:02.041837   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:09:02.041845   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:09:02.041912   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:09:02.077088   62745 cri.go:89] found id: ""
	I1026 02:09:02.077116   62745 logs.go:282] 0 containers: []
	W1026 02:09:02.077123   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:09:02.077129   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:09:02.077180   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:09:02.114603   62745 cri.go:89] found id: ""
	I1026 02:09:02.114630   62745 logs.go:282] 0 containers: []
	W1026 02:09:02.114638   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:09:02.114645   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:09:02.114705   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:09:02.149124   62745 cri.go:89] found id: ""
	I1026 02:09:02.149153   62745 logs.go:282] 0 containers: []
	W1026 02:09:02.149165   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:09:02.149172   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:09:02.149236   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:09:02.183885   62745 cri.go:89] found id: ""
	I1026 02:09:02.183916   62745 logs.go:282] 0 containers: []
	W1026 02:09:02.183927   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:09:02.183937   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:09:02.183950   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:09:02.266206   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:09:02.266245   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:09:02.305679   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:09:02.305711   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:09:02.355932   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:09:02.355972   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:09:02.369288   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:09:02.369316   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:09:02.433916   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 02:09:04.935049   62745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:09:04.953402   62745 kubeadm.go:597] duration metric: took 4m3.741693828s to restartPrimaryControlPlane
	W1026 02:09:04.953503   62745 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1026 02:09:04.953540   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1026 02:09:10.050421   62745 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.096859319s)
	I1026 02:09:10.050506   62745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 02:09:10.065231   62745 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 02:09:10.075554   62745 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:09:10.085543   62745 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:09:10.085565   62745 kubeadm.go:157] found existing configuration files:
	
	I1026 02:09:10.085631   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 02:09:10.094991   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:09:10.095054   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:09:10.104635   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 02:09:10.113803   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:09:10.113864   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:09:10.123460   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 02:09:10.132411   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:09:10.132472   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:09:10.141558   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 02:09:10.150054   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:09:10.150111   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
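Before re-running kubeadm init, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it; here all four files are missing, so each grep exits with status 2 and the removal is attempted anyway. A rough equivalent of that cleanup, assuming the same endpoint and file list shown in the log:

	# Sketch of the stale-kubeconfig cleanup performed above; endpoint and file names taken from the log.
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"   # drop configs that do not point at the expected endpoint
	  fi
	done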
	I1026 02:09:10.161808   62745 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 02:09:10.231369   62745 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1026 02:09:10.231494   62745 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 02:09:10.394653   62745 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 02:09:10.394842   62745 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 02:09:10.394994   62745 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1026 02:09:10.583351   62745 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 02:09:10.585369   62745 out.go:235]   - Generating certificates and keys ...
	I1026 02:09:10.585500   62745 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 02:09:10.585590   62745 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 02:09:10.585697   62745 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1026 02:09:10.585791   62745 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1026 02:09:10.585898   62745 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1026 02:09:10.585980   62745 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1026 02:09:10.586195   62745 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1026 02:09:10.586557   62745 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1026 02:09:10.586950   62745 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1026 02:09:10.587291   62745 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1026 02:09:10.587415   62745 kubeadm.go:310] [certs] Using the existing "sa" key
	I1026 02:09:10.587504   62745 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 02:09:10.860465   62745 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 02:09:11.279436   62745 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 02:09:11.406209   62745 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 02:09:11.681643   62745 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 02:09:11.696371   62745 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 02:09:11.697571   62745 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 02:09:11.697642   62745 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 02:09:11.833212   62745 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 02:09:11.834981   62745 out.go:235]   - Booting up control plane ...
	I1026 02:09:11.835117   62745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 02:09:11.840834   62745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 02:09:11.843456   62745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 02:09:11.843554   62745 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 02:09:11.846464   62745 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1026 02:09:51.847828   62745 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1026 02:09:51.847957   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:09:51.848200   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:09:56.848464   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:09:56.848669   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:10:06.849190   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:10:06.849488   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:10:26.850376   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:10:26.850598   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:11:06.852492   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:11:06.852819   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:11:06.852842   62745 kubeadm.go:310] 
	I1026 02:11:06.852910   62745 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1026 02:11:06.852968   62745 kubeadm.go:310] 		timed out waiting for the condition
	I1026 02:11:06.852992   62745 kubeadm.go:310] 
	I1026 02:11:06.853048   62745 kubeadm.go:310] 	This error is likely caused by:
	I1026 02:11:06.853094   62745 kubeadm.go:310] 		- The kubelet is not running
	I1026 02:11:06.853225   62745 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1026 02:11:06.853236   62745 kubeadm.go:310] 
	I1026 02:11:06.853361   62745 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1026 02:11:06.853441   62745 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1026 02:11:06.853495   62745 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1026 02:11:06.853505   62745 kubeadm.go:310] 
	I1026 02:11:06.853653   62745 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1026 02:11:06.853784   62745 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1026 02:11:06.853804   62745 kubeadm.go:310] 
	I1026 02:11:06.853970   62745 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1026 02:11:06.854059   62745 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1026 02:11:06.854125   62745 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1026 02:11:06.854224   62745 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1026 02:11:06.854250   62745 kubeadm.go:310] 
	I1026 02:11:06.854678   62745 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 02:11:06.854754   62745 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1026 02:11:06.854813   62745 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1026 02:11:06.854943   62745 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
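	(A minimal, illustrative way to run the checks suggested above from the host instead of inside the VM, assuming the profile name old-k8s-version-385716 used by this test; these commands are a sketch and not part of the captured output:)
	minikube -p old-k8s-version-385716 ssh -- sudo systemctl status kubelet
	minikube -p old-k8s-version-385716 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	minikube -p old-k8s-version-385716 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a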
	
	I1026 02:11:06.854989   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1026 02:11:12.306225   62745 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.451210775s)
	I1026 02:11:12.306315   62745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 02:11:12.319822   62745 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:11:12.328677   62745 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:11:12.328703   62745 kubeadm.go:157] found existing configuration files:
	
	I1026 02:11:12.328749   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 02:11:12.337470   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:11:12.337528   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:11:12.346110   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 02:11:12.354217   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:11:12.354268   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:11:12.362806   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 02:11:12.371067   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:11:12.371119   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:11:12.379886   62745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 02:11:12.388326   62745 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:11:12.388390   62745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 02:11:12.396637   62745 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 02:11:12.462439   62745 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1026 02:11:12.462496   62745 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 02:11:12.611392   62745 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 02:11:12.611545   62745 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 02:11:12.611700   62745 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1026 02:11:12.792037   62745 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 02:11:12.793412   62745 out.go:235]   - Generating certificates and keys ...
	I1026 02:11:12.793523   62745 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 02:11:12.793617   62745 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 02:11:12.793756   62745 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1026 02:11:12.793840   62745 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1026 02:11:12.793948   62745 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1026 02:11:12.794019   62745 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1026 02:11:12.794117   62745 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1026 02:11:12.794214   62745 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1026 02:11:12.794327   62745 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1026 02:11:12.794393   62745 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1026 02:11:12.794427   62745 kubeadm.go:310] [certs] Using the existing "sa" key
	I1026 02:11:12.794482   62745 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 02:11:13.022002   62745 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 02:11:13.257574   62745 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 02:11:13.433187   62745 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 02:11:13.566478   62745 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 02:11:13.582104   62745 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 02:11:13.583267   62745 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 02:11:13.583340   62745 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 02:11:13.736073   62745 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 02:11:13.738713   62745 out.go:235]   - Booting up control plane ...
	I1026 02:11:13.738828   62745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 02:11:13.738921   62745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 02:11:13.741059   62745 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 02:11:13.742288   62745 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 02:11:13.747621   62745 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1026 02:11:53.753616   62745 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1026 02:11:53.753760   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:11:53.754045   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:11:58.754371   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:11:58.754630   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:12:08.755338   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:12:08.755604   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:12:28.755162   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:12:28.755376   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:13:08.754281   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:13:08.754546   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:13:08.754571   62745 kubeadm.go:310] 
	I1026 02:13:08.754618   62745 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1026 02:13:08.754657   62745 kubeadm.go:310] 		timed out waiting for the condition
	I1026 02:13:08.754663   62745 kubeadm.go:310] 
	I1026 02:13:08.754698   62745 kubeadm.go:310] 	This error is likely caused by:
	I1026 02:13:08.754729   62745 kubeadm.go:310] 		- The kubelet is not running
	I1026 02:13:08.754845   62745 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1026 02:13:08.754858   62745 kubeadm.go:310] 
	I1026 02:13:08.755003   62745 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1026 02:13:08.755055   62745 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1026 02:13:08.755098   62745 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1026 02:13:08.755108   62745 kubeadm.go:310] 
	I1026 02:13:08.755234   62745 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1026 02:13:08.755325   62745 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1026 02:13:08.755336   62745 kubeadm.go:310] 
	I1026 02:13:08.755472   62745 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1026 02:13:08.755590   62745 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1026 02:13:08.755717   62745 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1026 02:13:08.755808   62745 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1026 02:13:08.755822   62745 kubeadm.go:310] 
	I1026 02:13:08.756320   62745 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 02:13:08.756433   62745 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1026 02:13:08.756533   62745 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1026 02:13:08.756597   62745 kubeadm.go:394] duration metric: took 8m7.60525109s to StartCluster
	I1026 02:13:08.756658   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:13:08.756718   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:13:08.801578   62745 cri.go:89] found id: ""
	I1026 02:13:08.801601   62745 logs.go:282] 0 containers: []
	W1026 02:13:08.801611   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:13:08.801619   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:13:08.801676   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:13:08.838217   62745 cri.go:89] found id: ""
	I1026 02:13:08.838243   62745 logs.go:282] 0 containers: []
	W1026 02:13:08.838251   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:13:08.838256   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:13:08.838310   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:13:08.874828   62745 cri.go:89] found id: ""
	I1026 02:13:08.874850   62745 logs.go:282] 0 containers: []
	W1026 02:13:08.874858   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:13:08.874864   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:13:08.874910   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:13:08.908817   62745 cri.go:89] found id: ""
	I1026 02:13:08.908849   62745 logs.go:282] 0 containers: []
	W1026 02:13:08.908861   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:13:08.908868   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:13:08.908929   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:13:08.940204   62745 cri.go:89] found id: ""
	I1026 02:13:08.940232   62745 logs.go:282] 0 containers: []
	W1026 02:13:08.940243   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:13:08.940250   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:13:08.940311   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:13:08.972715   62745 cri.go:89] found id: ""
	I1026 02:13:08.972745   62745 logs.go:282] 0 containers: []
	W1026 02:13:08.972755   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:13:08.972761   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:13:08.972811   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:13:09.005171   62745 cri.go:89] found id: ""
	I1026 02:13:09.005200   62745 logs.go:282] 0 containers: []
	W1026 02:13:09.005211   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:13:09.005218   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:13:09.005290   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:13:09.037041   62745 cri.go:89] found id: ""
	I1026 02:13:09.037064   62745 logs.go:282] 0 containers: []
	W1026 02:13:09.037072   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:13:09.037081   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:13:09.037090   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:13:09.145798   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:13:09.145829   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:13:09.188261   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:13:09.188294   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:13:09.258267   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:13:09.258299   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:13:09.280494   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:13:09.280525   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:13:09.352147   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
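	(The connection-refused error above is consistent with the empty crictl listings: no kube-apiserver container was ever created, so nothing listens on localhost:8443. A sketch of confirming this on the node, assuming the same profile name as above and that curl is available in the guest; not part of the captured output:)
	minikube -p old-k8s-version-385716 ssh -- sudo crictl ps -a --name kube-apiserver
	minikube -p old-k8s-version-385716 ssh -- curl -sk https://localhost:8443/healthz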
	W1026 02:13:09.352184   62745 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1026 02:13:09.352232   62745 out.go:270] * 
	W1026 02:13:09.352290   62745 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1026 02:13:09.352306   62745 out.go:270] * 
	W1026 02:13:09.353312   62745 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 02:13:09.356458   62745 out.go:201] 
	W1026 02:13:09.357637   62745 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1026 02:13:09.357681   62745 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1026 02:13:09.357700   62745 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1026 02:13:09.359166   62745 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-385716 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
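The suggestion printed in the log above amounts to repeating the recorded start arguments with the extra kubelet flag. A sketch assembled from the args captured by this test (whether it clears the K8S_KUBELET_NOT_RUNNING failure was not verified here):
	out/minikube-linux-amd64 start -p old-k8s-version-385716 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd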
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-385716 -n old-k8s-version-385716
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-385716 -n old-k8s-version-385716: exit status 2 (224.320265ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-385716 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-226333                                        | pause-226333                 | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-226333                                        | pause-226333                 | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-226333                                        | pause-226333                 | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	| start   | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-093148             | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC | 26 Oct 24 01:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-093148                                   | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-767480            | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC | 26 Oct 24 01:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-385716        | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-093148                  | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-093148                                   | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC | 26 Oct 24 02:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-767480                 | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC | 26 Oct 24 02:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-385716                              | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC | 26 Oct 24 02:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-385716             | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC | 26 Oct 24 02:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-385716                              | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:11 UTC |
	| delete  | -p                                                     | disable-driver-mounts-713871 | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:11 UTC |
	|         | disable-driver-mounts-713871                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:12 UTC |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-661357  | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:12 UTC | 26 Oct 24 02:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:12 UTC |                     |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
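
For reference, the multi-row "start" entry for default-k8s-diff-port-661357 in the table above reassembles into the following single command line (a reconstruction from the table rows, not an extra invocation):

    minikube start -p default-k8s-diff-port-661357 \
      --memory=2200 --alsologtostderr --wait=true \
      --apiserver-port=8444 --driver=kvm2 \
      --container-runtime=crio --kubernetes-version=v1.31.2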
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 02:11:23
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 02:11:23.323660   65754 out.go:345] Setting OutFile to fd 1 ...
	I1026 02:11:23.323755   65754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:11:23.323762   65754 out.go:358] Setting ErrFile to fd 2...
	I1026 02:11:23.323766   65754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:11:23.323968   65754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 02:11:23.324507   65754 out.go:352] Setting JSON to false
	I1026 02:11:23.325390   65754 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6823,"bootTime":1729901860,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 02:11:23.325482   65754 start.go:139] virtualization: kvm guest
	I1026 02:11:23.327609   65754 out.go:177] * [default-k8s-diff-port-661357] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 02:11:23.329450   65754 notify.go:220] Checking for updates...
	I1026 02:11:23.329470   65754 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 02:11:23.330626   65754 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 02:11:23.331836   65754 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:11:23.332883   65754 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:11:23.333910   65754 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 02:11:23.334988   65754 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 02:11:23.336418   65754 config.go:182] Loaded profile config "embed-certs-767480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:11:23.336515   65754 config.go:182] Loaded profile config "no-preload-093148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:11:23.336596   65754 config.go:182] Loaded profile config "old-k8s-version-385716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1026 02:11:23.336692   65754 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 02:11:23.372416   65754 out.go:177] * Using the kvm2 driver based on user configuration
	I1026 02:11:23.373538   65754 start.go:297] selected driver: kvm2
	I1026 02:11:23.373552   65754 start.go:901] validating driver "kvm2" against <nil>
	I1026 02:11:23.373562   65754 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 02:11:23.374269   65754 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:11:23.374332   65754 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 02:11:23.389972   65754 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 02:11:23.390030   65754 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 02:11:23.390290   65754 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:11:23.390323   65754 cni.go:84] Creating CNI manager for ""
	I1026 02:11:23.390362   65754 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:11:23.390370   65754 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 02:11:23.390418   65754 start.go:340] cluster config:
	{Name:default-k8s-diff-port-661357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:11:23.390514   65754 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:11:23.392404   65754 out.go:177] * Starting "default-k8s-diff-port-661357" primary control-plane node in "default-k8s-diff-port-661357" cluster
	I1026 02:11:23.393721   65754 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:11:23.393759   65754 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 02:11:23.393766   65754 cache.go:56] Caching tarball of preloaded images
	I1026 02:11:23.393860   65754 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 02:11:23.393873   65754 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 02:11:23.393964   65754 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/config.json ...
	I1026 02:11:23.393982   65754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/config.json: {Name:mk27f28daf19c13bc051c7034107fcb68b9f309c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:11:23.394144   65754 start.go:360] acquireMachinesLock for default-k8s-diff-port-661357: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 02:11:23.394182   65754 start.go:364] duration metric: took 18.348µs to acquireMachinesLock for "default-k8s-diff-port-661357"
	I1026 02:11:23.394206   65754 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-661357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 02:11:23.394276   65754 start.go:125] createHost starting for "" (driver="kvm2")
	I1026 02:11:23.395967   65754 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1026 02:11:23.396101   65754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:11:23.396148   65754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:11:23.410998   65754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44511
	I1026 02:11:23.411428   65754 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:11:23.411971   65754 main.go:141] libmachine: Using API Version  1
	I1026 02:11:23.411993   65754 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:11:23.412378   65754 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:11:23.412556   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetMachineName
	I1026 02:11:23.412709   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:11:23.412836   65754 start.go:159] libmachine.API.Create for "default-k8s-diff-port-661357" (driver="kvm2")
	I1026 02:11:23.412867   65754 client.go:168] LocalClient.Create starting
	I1026 02:11:23.412900   65754 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 02:11:23.412938   65754 main.go:141] libmachine: Decoding PEM data...
	I1026 02:11:23.412961   65754 main.go:141] libmachine: Parsing certificate...
	I1026 02:11:23.413024   65754 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 02:11:23.413049   65754 main.go:141] libmachine: Decoding PEM data...
	I1026 02:11:23.413070   65754 main.go:141] libmachine: Parsing certificate...
	I1026 02:11:23.413093   65754 main.go:141] libmachine: Running pre-create checks...
	I1026 02:11:23.413105   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .PreCreateCheck
	I1026 02:11:23.413425   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetConfigRaw
	I1026 02:11:23.413790   65754 main.go:141] libmachine: Creating machine...
	I1026 02:11:23.413802   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Create
	I1026 02:11:23.413943   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Creating KVM machine...
	I1026 02:11:23.415223   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found existing default KVM network
	I1026 02:11:23.416338   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:23.416185   65777 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:5f:ea:4a} reservation:<nil>}
	I1026 02:11:23.417120   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:23.417059   65777 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:5c:67:14} reservation:<nil>}
	I1026 02:11:23.417987   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:23.417919   65777 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:79:f3:4c} reservation:<nil>}
	I1026 02:11:23.419099   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:23.419017   65777 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00032af90}
	I1026 02:11:23.419130   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | created network xml: 
	I1026 02:11:23.419144   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | <network>
	I1026 02:11:23.419155   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG |   <name>mk-default-k8s-diff-port-661357</name>
	I1026 02:11:23.419160   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG |   <dns enable='no'/>
	I1026 02:11:23.419172   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG |   
	I1026 02:11:23.419180   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1026 02:11:23.419186   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG |     <dhcp>
	I1026 02:11:23.419200   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1026 02:11:23.419212   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG |     </dhcp>
	I1026 02:11:23.419221   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG |   </ip>
	I1026 02:11:23.419232   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG |   
	I1026 02:11:23.419247   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | </network>
	I1026 02:11:23.419297   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | 
	I1026 02:11:23.424375   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | trying to create private KVM network mk-default-k8s-diff-port-661357 192.168.72.0/24...
	I1026 02:11:23.493608   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | private KVM network mk-default-k8s-diff-port-661357 192.168.72.0/24 created
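
The private libvirt network above is created through the kvm2 driver's libvirt bindings; a rough hand-run equivalent with standard virsh tooling, assuming the generated XML were saved to a local file (the file name is hypothetical), would be:

    # Define and start a private libvirt network from XML like the block logged above
    virsh net-define mk-default-k8s-diff-port-661357.xml   # hypothetical file containing the XML
    virsh net-start mk-default-k8s-diff-port-661357
    virsh net-list --all                                    # confirm the network shows as active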
	I1026 02:11:23.493654   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:23.493601   65777 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:11:23.493667   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357 ...
	I1026 02:11:23.493687   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 02:11:23.493746   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 02:11:23.745312   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:23.745214   65777 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa...
	I1026 02:11:23.802297   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:23.802202   65777 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/default-k8s-diff-port-661357.rawdisk...
	I1026 02:11:23.802338   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Writing magic tar header
	I1026 02:11:23.802351   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Writing SSH key tar header
	I1026 02:11:23.802364   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:23.802333   65777 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357 ...
	I1026 02:11:23.802505   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357
	I1026 02:11:23.802531   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357 (perms=drwx------)
	I1026 02:11:23.802542   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 02:11:23.802574   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:11:23.802589   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 02:11:23.802601   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 02:11:23.802616   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 02:11:23.802627   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Checking permissions on dir: /home/jenkins
	I1026 02:11:23.802643   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Checking permissions on dir: /home
	I1026 02:11:23.802655   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Skipping /home - not owner
	I1026 02:11:23.802679   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 02:11:23.802697   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 02:11:23.802727   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 02:11:23.802758   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 02:11:23.802772   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Creating domain...
	I1026 02:11:23.803751   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) define libvirt domain using xml: 
	I1026 02:11:23.803776   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) <domain type='kvm'>
	I1026 02:11:23.803787   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)   <name>default-k8s-diff-port-661357</name>
	I1026 02:11:23.803800   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)   <memory unit='MiB'>2200</memory>
	I1026 02:11:23.803806   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)   <vcpu>2</vcpu>
	I1026 02:11:23.803812   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)   <features>
	I1026 02:11:23.803819   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     <acpi/>
	I1026 02:11:23.803826   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     <apic/>
	I1026 02:11:23.803831   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     <pae/>
	I1026 02:11:23.803837   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     
	I1026 02:11:23.803842   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)   </features>
	I1026 02:11:23.803859   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)   <cpu mode='host-passthrough'>
	I1026 02:11:23.803866   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)   
	I1026 02:11:23.803870   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)   </cpu>
	I1026 02:11:23.803877   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)   <os>
	I1026 02:11:23.803882   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     <type>hvm</type>
	I1026 02:11:23.803889   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     <boot dev='cdrom'/>
	I1026 02:11:23.803896   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     <boot dev='hd'/>
	I1026 02:11:23.803921   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     <bootmenu enable='no'/>
	I1026 02:11:23.803952   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)   </os>
	I1026 02:11:23.803961   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)   <devices>
	I1026 02:11:23.803970   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     <disk type='file' device='cdrom'>
	I1026 02:11:23.803987   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/boot2docker.iso'/>
	I1026 02:11:23.803997   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)       <target dev='hdc' bus='scsi'/>
	I1026 02:11:23.804010   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)       <readonly/>
	I1026 02:11:23.804022   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     </disk>
	I1026 02:11:23.804052   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     <disk type='file' device='disk'>
	I1026 02:11:23.804079   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 02:11:23.804100   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/default-k8s-diff-port-661357.rawdisk'/>
	I1026 02:11:23.804117   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)       <target dev='hda' bus='virtio'/>
	I1026 02:11:23.804128   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     </disk>
	I1026 02:11:23.804136   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     <interface type='network'>
	I1026 02:11:23.804150   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)       <source network='mk-default-k8s-diff-port-661357'/>
	I1026 02:11:23.804162   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)       <model type='virtio'/>
	I1026 02:11:23.804171   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     </interface>
	I1026 02:11:23.804180   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     <interface type='network'>
	I1026 02:11:23.804192   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)       <source network='default'/>
	I1026 02:11:23.804202   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)       <model type='virtio'/>
	I1026 02:11:23.804213   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     </interface>
	I1026 02:11:23.804223   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     <serial type='pty'>
	I1026 02:11:23.804243   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)       <target port='0'/>
	I1026 02:11:23.804253   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     </serial>
	I1026 02:11:23.804261   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     <console type='pty'>
	I1026 02:11:23.804271   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)       <target type='serial' port='0'/>
	I1026 02:11:23.804281   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     </console>
	I1026 02:11:23.804285   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     <rng model='virtio'>
	I1026 02:11:23.804297   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)       <backend model='random'>/dev/random</backend>
	I1026 02:11:23.804307   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     </rng>
	I1026 02:11:23.804328   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     
	I1026 02:11:23.804340   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)     
	I1026 02:11:23.804358   65754 main.go:141] libmachine: (default-k8s-diff-port-661357)   </devices>
	I1026 02:11:23.804368   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) </domain>
	I1026 02:11:23.804379   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) 
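
The domain XML above is likewise submitted via the driver rather than the CLI; done by hand with virsh, and assuming the XML were written to a local file (file name hypothetical), the same step would look roughly like:

    # Register the domain definition and boot the VM
    virsh define default-k8s-diff-port-661357.xml   # hypothetical file containing the domain XML
    virsh start default-k8s-diff-port-661357
    virsh dominfo default-k8s-diff-port-661357      # confirm the domain is running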
	I1026 02:11:23.809649   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:72:4f:da in network default
	I1026 02:11:23.810263   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:23.810283   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Ensuring networks are active...
	I1026 02:11:23.810926   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Ensuring network default is active
	I1026 02:11:23.811271   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Ensuring network mk-default-k8s-diff-port-661357 is active
	I1026 02:11:23.811870   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Getting domain xml...
	I1026 02:11:23.812631   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Creating domain...
	I1026 02:11:25.048233   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting to get IP...
	I1026 02:11:25.049077   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:25.049610   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:11:25.049634   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:25.049589   65777 retry.go:31] will retry after 246.525656ms: waiting for machine to come up
	I1026 02:11:25.298072   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:25.298653   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:11:25.298687   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:25.298623   65777 retry.go:31] will retry after 304.734388ms: waiting for machine to come up
	I1026 02:11:25.605190   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:25.605753   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:11:25.605781   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:25.605728   65777 retry.go:31] will retry after 422.445862ms: waiting for machine to come up
	I1026 02:11:26.029224   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:26.029860   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:11:26.029890   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:26.029820   65777 retry.go:31] will retry after 385.230841ms: waiting for machine to come up
	I1026 02:11:26.416387   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:26.416873   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:11:26.416901   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:26.416821   65777 retry.go:31] will retry after 565.882413ms: waiting for machine to come up
	I1026 02:11:26.984509   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:26.984980   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:11:26.985004   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:26.984931   65777 retry.go:31] will retry after 648.429171ms: waiting for machine to come up
	I1026 02:11:27.634806   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:27.635236   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:11:27.635284   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:27.635222   65777 retry.go:31] will retry after 808.918013ms: waiting for machine to come up
	I1026 02:11:28.445770   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:28.446156   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:11:28.446178   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:28.446105   65777 retry.go:31] will retry after 971.609187ms: waiting for machine to come up
	I1026 02:11:29.419545   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:29.419993   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:11:29.420015   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:29.419949   65777 retry.go:31] will retry after 1.126190858s: waiting for machine to come up
	I1026 02:11:30.547616   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:30.548163   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:11:30.548206   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:30.548139   65777 retry.go:31] will retry after 1.470882486s: waiting for machine to come up
	I1026 02:11:32.020892   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:32.021396   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:11:32.021412   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:32.021379   65777 retry.go:31] will retry after 2.572488109s: waiting for machine to come up
	I1026 02:11:34.595817   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:34.596256   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:11:34.596283   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:34.596216   65777 retry.go:31] will retry after 2.470532945s: waiting for machine to come up
	I1026 02:11:37.068336   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:37.068868   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:11:37.068890   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:37.068814   65777 retry.go:31] will retry after 4.54033917s: waiting for machine to come up
	I1026 02:11:41.610863   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:41.611328   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:11:41.611351   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:11:41.611292   65777 retry.go:31] will retry after 4.223962054s: waiting for machine to come up
	I1026 02:11:45.838416   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:45.838925   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Found IP for machine: 192.168.72.18
	I1026 02:11:45.838958   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Reserving static IP address...
	I1026 02:11:45.838974   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has current primary IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:45.839348   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find host DHCP lease matching {name: "default-k8s-diff-port-661357", mac: "52:54:00:0c:41:27", ip: "192.168.72.18"} in network mk-default-k8s-diff-port-661357
	I1026 02:11:45.915456   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Getting to WaitForSSH function...
	I1026 02:11:45.915487   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Reserved static IP address: 192.168.72.18
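
The "Waiting to get IP" loop above repeatedly asks libvirt for a DHCP lease matching the VM's MAC address, backing off between attempts. A minimal hand-run sketch of the same check (not the driver's actual code path) is:

    # Poll the private network's DHCP leases until the VM's MAC appears
    for i in $(seq 1 30); do
      virsh net-dhcp-leases mk-default-k8s-diff-port-661357 | grep -q '52:54:00:0c:41:27' && break
      sleep 2
    done
    virsh net-dhcp-leases mk-default-k8s-diff-port-661357   # shows 192.168.72.18 once assigned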
	I1026 02:11:45.915585   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for SSH to be available...
	I1026 02:11:45.918298   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:45.918721   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:45.918751   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:45.918922   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Using SSH client type: external
	I1026 02:11:45.918939   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa (-rw-------)
	I1026 02:11:45.918965   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 02:11:45.918978   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | About to run SSH command:
	I1026 02:11:45.918988   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | exit 0
	I1026 02:11:46.045276   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | SSH cmd err, output: <nil>: 
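
The WaitForSSH step shells out to the system ssh client with the options logged above and simply runs "exit 0" until it succeeds. Reproduced by hand, the probe would look roughly like:

    # Probe SSH readiness with the machine's generated key (options taken from the log above)
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o ConnectTimeout=10 -o PasswordAuthentication=no \
        -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa \
        docker@192.168.72.18 'exit 0' && echo "SSH is up"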
	I1026 02:11:46.045527   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) KVM machine creation complete!
	I1026 02:11:46.045881   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetConfigRaw
	I1026 02:11:46.046491   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:11:46.046733   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:11:46.046945   65754 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 02:11:46.046965   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:11:46.048374   65754 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 02:11:46.048396   65754 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 02:11:46.048403   65754 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 02:11:46.048410   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:11:46.050879   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:46.051235   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:46.051256   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:46.051557   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:11:46.051735   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:11:46.051904   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:11:46.052043   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:11:46.052282   65754 main.go:141] libmachine: Using SSH client type: native
	I1026 02:11:46.052515   65754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:11:46.052529   65754 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 02:11:46.160539   65754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:11:46.160561   65754 main.go:141] libmachine: Detecting the provisioner...
	I1026 02:11:46.160569   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:11:46.163601   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:46.163996   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:46.164019   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:46.164243   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:11:46.164455   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:11:46.164632   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:11:46.164765   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:11:46.164935   65754 main.go:141] libmachine: Using SSH client type: native
	I1026 02:11:46.165093   65754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:11:46.165104   65754 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 02:11:46.273655   65754 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 02:11:46.273734   65754 main.go:141] libmachine: found compatible host: buildroot
	I1026 02:11:46.273743   65754 main.go:141] libmachine: Provisioning with buildroot...
	I1026 02:11:46.273750   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetMachineName
	I1026 02:11:46.274008   65754 buildroot.go:166] provisioning hostname "default-k8s-diff-port-661357"
	I1026 02:11:46.274040   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetMachineName
	I1026 02:11:46.274222   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:11:46.276955   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:46.277312   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:46.277343   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:46.277536   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:11:46.277733   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:11:46.277871   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:11:46.277977   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:11:46.278143   65754 main.go:141] libmachine: Using SSH client type: native
	I1026 02:11:46.278314   65754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:11:46.278326   65754 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-661357 && echo "default-k8s-diff-port-661357" | sudo tee /etc/hostname
	I1026 02:11:46.399702   65754 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-661357
	
	I1026 02:11:46.399730   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:11:46.402636   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:46.402935   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:46.402963   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:46.403164   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:11:46.403336   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:11:46.403503   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:11:46.403644   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:11:46.403824   65754 main.go:141] libmachine: Using SSH client type: native
	I1026 02:11:46.404037   65754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:11:46.404054   65754 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-661357' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-661357/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-661357' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 02:11:46.522234   65754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:11:46.522268   65754 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 02:11:46.522311   65754 buildroot.go:174] setting up certificates
	I1026 02:11:46.522324   65754 provision.go:84] configureAuth start
	I1026 02:11:46.522338   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetMachineName
	I1026 02:11:46.522639   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetIP
	I1026 02:11:46.525128   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:46.525532   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:46.525561   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:46.525741   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:11:46.527594   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:46.527922   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:46.527958   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:46.528042   65754 provision.go:143] copyHostCerts
	I1026 02:11:46.528130   65754 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 02:11:46.528144   65754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 02:11:46.528227   65754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 02:11:46.528330   65754 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 02:11:46.528342   65754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 02:11:46.528378   65754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 02:11:46.528449   65754 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 02:11:46.528460   65754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 02:11:46.528497   65754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 02:11:46.528587   65754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-661357 san=[127.0.0.1 192.168.72.18 default-k8s-diff-port-661357 localhost minikube]
	I1026 02:11:46.687475   65754 provision.go:177] copyRemoteCerts
	I1026 02:11:46.687531   65754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 02:11:46.687553   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:11:46.690213   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:46.690531   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:46.690557   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:46.690745   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:11:46.690920   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:11:46.691068   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:11:46.691199   65754 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:11:46.775042   65754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 02:11:46.797391   65754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1026 02:11:46.818861   65754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 02:11:46.840786   65754 provision.go:87] duration metric: took 318.450712ms to configureAuth
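
A note for readers of this log: the configureAuth phase above (provision.go:84-117) generates a server certificate whose SANs are the ones printed at provision.go:117 (127.0.0.1, 192.168.72.18, default-k8s-diff-port-661357, localhost, minikube). The Go sketch below shows the general shape of such a step; it is self-signed for brevity and is not minikube's provisioning code, which signs the server cert with ca.pem/ca-key.pem from the .minikube/certs directory.

// Sketch only: issue a cert covering the SANs listed in the log above.
// Self-signed here; the real flow signs with the minikube CA instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-661357"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go:117 line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.18")},
		DNSNames:    []string{"default-k8s-diff-port-661357", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
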
	I1026 02:11:46.840812   65754 buildroot.go:189] setting minikube options for container-runtime
	I1026 02:11:46.841017   65754 config.go:182] Loaded profile config "default-k8s-diff-port-661357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:11:46.841092   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:11:46.843471   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:46.843775   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:46.843806   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:46.843962   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:11:46.844125   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:11:46.844314   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:11:46.844465   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:11:46.844619   65754 main.go:141] libmachine: Using SSH client type: native
	I1026 02:11:46.844843   65754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:11:46.844864   65754 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 02:11:47.068325   65754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 02:11:47.068346   65754 main.go:141] libmachine: Checking connection to Docker...
	I1026 02:11:47.068354   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetURL
	I1026 02:11:47.069660   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Using libvirt version 6000000
	I1026 02:11:47.071647   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:47.072046   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:47.072078   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:47.072266   65754 main.go:141] libmachine: Docker is up and running!
	I1026 02:11:47.072279   65754 main.go:141] libmachine: Reticulating splines...
	I1026 02:11:47.072284   65754 client.go:171] duration metric: took 23.659410754s to LocalClient.Create
	I1026 02:11:47.072306   65754 start.go:167] duration metric: took 23.659472106s to libmachine.API.Create "default-k8s-diff-port-661357"
	I1026 02:11:47.072315   65754 start.go:293] postStartSetup for "default-k8s-diff-port-661357" (driver="kvm2")
	I1026 02:11:47.072324   65754 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 02:11:47.072339   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:11:47.072572   65754 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 02:11:47.072593   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:11:47.074686   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:47.074952   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:47.074977   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:47.075100   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:11:47.075275   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:11:47.075395   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:11:47.075508   65754 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:11:47.160115   65754 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 02:11:47.164212   65754 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 02:11:47.164236   65754 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 02:11:47.164311   65754 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 02:11:47.164394   65754 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 02:11:47.164498   65754 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 02:11:47.174972   65754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:11:47.196980   65754 start.go:296] duration metric: took 124.648533ms for postStartSetup
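
The filesync.go lines above scan .minikube/addons and .minikube/files and map each local asset to a destination inside the VM (here files/etc/ssl/certs/176152.pem becomes /etc/ssl/certs/176152.pem). A rough Go sketch of that scan, assuming a plain prefix-strip rather than minikube's actual filesync rules:

// Walk the local files tree and print each asset with its in-VM target path.
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	root := "/home/jenkins/minikube-integration/19868-8680/.minikube/files"
	filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		dest := strings.TrimPrefix(path, root) // e.g. /etc/ssl/certs/176152.pem
		fmt.Printf("local asset: %s -> %s\n", path, dest)
		return nil
	})
}
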
	I1026 02:11:47.197027   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetConfigRaw
	I1026 02:11:47.197652   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetIP
	I1026 02:11:47.200403   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:47.200798   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:47.200827   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:47.201088   65754 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/config.json ...
	I1026 02:11:47.201262   65754 start.go:128] duration metric: took 23.806967248s to createHost
	I1026 02:11:47.201293   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:11:47.203741   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:47.204024   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:47.204051   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:47.204196   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:11:47.204358   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:11:47.204515   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:11:47.204647   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:11:47.204810   65754 main.go:141] libmachine: Using SSH client type: native
	I1026 02:11:47.205022   65754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:11:47.205033   65754 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 02:11:47.313687   65754 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729908707.290966113
	
	I1026 02:11:47.313710   65754 fix.go:216] guest clock: 1729908707.290966113
	I1026 02:11:47.313719   65754 fix.go:229] Guest: 2024-10-26 02:11:47.290966113 +0000 UTC Remote: 2024-10-26 02:11:47.201282702 +0000 UTC m=+23.914432218 (delta=89.683411ms)
	I1026 02:11:47.313743   65754 fix.go:200] guest clock delta is within tolerance: 89.683411ms
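
The fix.go lines above compare the guest clock with the host clock and accept an 89.683411ms delta. Reduced to its arithmetic, using the two timestamps from the log and a 2-second tolerance that is assumed purely for illustration:

// Compute the absolute guest/host clock delta and compare it to a tolerance.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1729908707, 290966113)                         // guest clock from the log
	remote := time.Date(2024, 10, 26, 2, 11, 47, 201282702, time.UTC) // host-side timestamp
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}
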
	I1026 02:11:47.313751   65754 start.go:83] releasing machines lock for "default-k8s-diff-port-661357", held for 23.919556698s
	I1026 02:11:47.313774   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:11:47.314062   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetIP
	I1026 02:11:47.316855   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:47.317231   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:47.317255   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:47.317378   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:11:47.317885   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:11:47.318072   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:11:47.318173   65754 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 02:11:47.318213   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:11:47.318307   65754 ssh_runner.go:195] Run: cat /version.json
	I1026 02:11:47.318331   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:11:47.320852   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:47.320957   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:47.321223   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:47.321249   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:47.321276   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:47.321292   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:47.321407   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:11:47.321540   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:11:47.321638   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:11:47.321649   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:11:47.321782   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:11:47.321793   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:11:47.321962   65754 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:11:47.321998   65754 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:11:47.402049   65754 ssh_runner.go:195] Run: systemctl --version
	I1026 02:11:47.434013   65754 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 02:11:47.588305   65754 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 02:11:47.594314   65754 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 02:11:47.594388   65754 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 02:11:47.609571   65754 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 02:11:47.609596   65754 start.go:495] detecting cgroup driver to use...
	I1026 02:11:47.609663   65754 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 02:11:47.624791   65754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 02:11:47.637658   65754 docker.go:217] disabling cri-docker service (if available) ...
	I1026 02:11:47.637721   65754 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 02:11:47.650318   65754 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 02:11:47.663034   65754 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 02:11:47.773090   65754 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 02:11:47.901996   65754 docker.go:233] disabling docker service ...
	I1026 02:11:47.902052   65754 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 02:11:47.915443   65754 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 02:11:47.928302   65754 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 02:11:48.077517   65754 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 02:11:48.205545   65754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 02:11:48.219966   65754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 02:11:48.238627   65754 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 02:11:48.238698   65754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:11:48.251925   65754 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 02:11:48.251998   65754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:11:48.262396   65754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:11:48.272373   65754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:11:48.282837   65754 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 02:11:48.293906   65754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:11:48.304257   65754 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:11:48.321026   65754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:11:48.331644   65754 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 02:11:48.341292   65754 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 02:11:48.341343   65754 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 02:11:48.354217   65754 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 02:11:48.363892   65754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:11:48.480932   65754 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 02:11:48.572596   65754 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 02:11:48.572671   65754 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 02:11:48.577077   65754 start.go:563] Will wait 60s for crictl version
	I1026 02:11:48.577132   65754 ssh_runner.go:195] Run: which crictl
	I1026 02:11:48.580686   65754 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 02:11:48.625107   65754 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
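
start.go:542 and start.go:563 above each wait up to 60s, first for the CRI socket path and then for crictl to answer. A minimal sketch of such a wait loop; the 500ms poll interval is an assumption of this example, not taken from minikube:

// Poll for a socket path until it exists or the deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
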
	I1026 02:11:48.625193   65754 ssh_runner.go:195] Run: crio --version
	I1026 02:11:48.652275   65754 ssh_runner.go:195] Run: crio --version
	I1026 02:11:48.684604   65754 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 02:11:48.686018   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetIP
	I1026 02:11:48.688585   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:48.688991   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:11:48.689024   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:11:48.689242   65754 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1026 02:11:48.693404   65754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:11:48.705454   65754 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-661357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 02:11:48.705602   65754 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:11:48.705676   65754 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:11:48.735525   65754 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1026 02:11:48.735604   65754 ssh_runner.go:195] Run: which lz4
	I1026 02:11:48.739709   65754 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 02:11:48.744650   65754 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 02:11:48.744698   65754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1026 02:11:49.926017   65754 crio.go:462] duration metric: took 1.186330708s to copy over tarball
	I1026 02:11:49.926093   65754 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 02:11:51.942653   65754 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.016529357s)
	I1026 02:11:51.942687   65754 crio.go:469] duration metric: took 2.01663945s to extract the tarball
	I1026 02:11:51.942697   65754 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 02:11:51.977660   65754 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:11:52.021487   65754 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 02:11:52.021510   65754 cache_images.go:84] Images are preloaded, skipping loading
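
crio.go:510 and crio.go:514 above decide whether the preload tarball has to be copied by listing images with `sudo crictl images --output json` and looking for the expected control-plane images. A hedged sketch of that check; the JSON field names below are assumptions about crictl's output shape, not verified against this crictl build:

// Ask crictl for its image list and report whether a given tag is present.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if strings.Contains(t, tag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.2")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", ok)
}
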
	I1026 02:11:52.021518   65754 kubeadm.go:934] updating node { 192.168.72.18 8444 v1.31.2 crio true true} ...
	I1026 02:11:52.021638   65754 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-661357 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
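
The kubelet drop-in printed above is built from the node's name, IP and Kubernetes version. An illustrative text/template rendering of that drop-in follows; the template literal is a reduction written for this example, not minikube's actual kubelet template:

// Render a kubelet systemd drop-in from the node values seen in the log.
package main

import (
	"os"
	"text/template"
)

const unit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.2",
		"NodeName":          "default-k8s-diff-port-661357",
		"NodeIP":            "192.168.72.18",
	})
}
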
	I1026 02:11:52.021717   65754 ssh_runner.go:195] Run: crio config
	I1026 02:11:52.074852   65754 cni.go:84] Creating CNI manager for ""
	I1026 02:11:52.074871   65754 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:11:52.074880   65754 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 02:11:52.074898   65754 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.18 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-661357 NodeName:default-k8s-diff-port-661357 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 02:11:52.075014   65754 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.18
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-661357"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.18"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.18"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 02:11:52.075069   65754 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 02:11:52.084283   65754 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 02:11:52.084346   65754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 02:11:52.093971   65754 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1026 02:11:52.110141   65754 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 02:11:52.125525   65754 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1026 02:11:52.140959   65754 ssh_runner.go:195] Run: grep 192.168.72.18	control-plane.minikube.internal$ /etc/hosts
	I1026 02:11:52.144781   65754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:11:52.155812   65754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:11:52.275665   65754 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:11:52.290518   65754 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357 for IP: 192.168.72.18
	I1026 02:11:52.290539   65754 certs.go:194] generating shared ca certs ...
	I1026 02:11:52.290555   65754 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:11:52.290731   65754 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 02:11:52.290794   65754 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 02:11:52.290809   65754 certs.go:256] generating profile certs ...
	I1026 02:11:52.290899   65754 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/client.key
	I1026 02:11:52.290921   65754 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/client.crt with IP's: []
	I1026 02:11:52.436228   65754 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/client.crt ...
	I1026 02:11:52.436255   65754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/client.crt: {Name:mk2b16d7578037a8f03a97db83f4ca3af7c495fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:11:52.436443   65754 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/client.key ...
	I1026 02:11:52.436457   65754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/client.key: {Name:mk8cb60a1bb4242129075a9318a456618b0f2775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:11:52.436567   65754 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.key.29c0eec6
	I1026 02:11:52.436603   65754 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.crt.29c0eec6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.18]
	I1026 02:11:52.574573   65754 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.crt.29c0eec6 ...
	I1026 02:11:52.574600   65754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.crt.29c0eec6: {Name:mkb87a38ad5dbf31d81f050543c2c667ca121665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:11:52.574779   65754 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.key.29c0eec6 ...
	I1026 02:11:52.574797   65754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.key.29c0eec6: {Name:mk04932f6c8a88b2bc3b9f3c6fdedad198fae5b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:11:52.574891   65754 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.crt.29c0eec6 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.crt
	I1026 02:11:52.575001   65754 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.key.29c0eec6 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.key
	I1026 02:11:52.575077   65754 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/proxy-client.key
	I1026 02:11:52.575098   65754 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/proxy-client.crt with IP's: []
	I1026 02:11:52.703349   65754 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/proxy-client.crt ...
	I1026 02:11:52.703374   65754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/proxy-client.crt: {Name:mk89248d285da92d5b060199ee20ab8f1851ee8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:11:52.703552   65754 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/proxy-client.key ...
	I1026 02:11:52.703587   65754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/proxy-client.key: {Name:mkf41cbee5deaf12fe0aa95b71d3c93270231c01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:11:52.703842   65754 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 02:11:52.703889   65754 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 02:11:52.703904   65754 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 02:11:52.703935   65754 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 02:11:52.703964   65754 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 02:11:52.703998   65754 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 02:11:52.704055   65754 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:11:52.704710   65754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 02:11:52.728918   65754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 02:11:52.751845   65754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 02:11:52.773691   65754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 02:11:52.794910   65754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 02:11:52.816991   65754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 02:11:52.838364   65754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 02:11:52.859842   65754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 02:11:52.881233   65754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 02:11:52.902365   65754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 02:11:52.923232   65754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 02:11:52.946242   65754 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 02:11:52.960991   65754 ssh_runner.go:195] Run: openssl version
	I1026 02:11:52.966150   65754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 02:11:52.975943   65754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 02:11:52.979908   65754 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 02:11:52.979964   65754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 02:11:52.985528   65754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 02:11:52.995654   65754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 02:11:53.005944   65754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 02:11:53.010187   65754 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 02:11:53.010235   65754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 02:11:53.015529   65754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 02:11:53.026088   65754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 02:11:53.038052   65754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:11:53.050135   65754 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:11:53.050206   65754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:11:53.058794   65754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 02:11:53.071900   65754 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 02:11:53.076138   65754 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 02:11:53.076194   65754 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-661357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:11:53.076273   65754 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 02:11:53.076313   65754 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:11:53.115566   65754 cri.go:89] found id: ""
	I1026 02:11:53.115628   65754 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 02:11:53.125438   65754 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 02:11:53.135442   65754 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:11:53.145277   65754 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:11:53.145306   65754 kubeadm.go:157] found existing configuration files:
	
	I1026 02:11:53.145346   65754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1026 02:11:53.155096   65754 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:11:53.155156   65754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:11:53.164295   65754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1026 02:11:53.173102   65754 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:11:53.173169   65754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:11:53.182631   65754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1026 02:11:53.191645   65754 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:11:53.191697   65754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:11:53.201376   65754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1026 02:11:53.210139   65754 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:11:53.210197   65754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 02:11:53.218886   65754 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 02:11:53.753616   62745 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1026 02:11:53.753760   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:11:53.754045   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:11:53.419885   65754 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 02:11:58.754371   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:11:58.754630   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:12:03.076505   65754 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1026 02:12:03.076581   65754 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 02:12:03.076658   65754 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 02:12:03.076806   65754 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 02:12:03.076932   65754 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 02:12:03.077038   65754 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 02:12:03.078553   65754 out.go:235]   - Generating certificates and keys ...
	I1026 02:12:03.078639   65754 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 02:12:03.078727   65754 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 02:12:03.078827   65754 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 02:12:03.078909   65754 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1026 02:12:03.078999   65754 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1026 02:12:03.079071   65754 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1026 02:12:03.079153   65754 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1026 02:12:03.079315   65754 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-661357 localhost] and IPs [192.168.72.18 127.0.0.1 ::1]
	I1026 02:12:03.079377   65754 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1026 02:12:03.079565   65754 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-661357 localhost] and IPs [192.168.72.18 127.0.0.1 ::1]
	I1026 02:12:03.079667   65754 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 02:12:03.079772   65754 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 02:12:03.079841   65754 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1026 02:12:03.079927   65754 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 02:12:03.079994   65754 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 02:12:03.080076   65754 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 02:12:03.080157   65754 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 02:12:03.080250   65754 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 02:12:03.080332   65754 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 02:12:03.080458   65754 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 02:12:03.080540   65754 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 02:12:03.081887   65754 out.go:235]   - Booting up control plane ...
	I1026 02:12:03.081974   65754 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 02:12:03.082080   65754 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 02:12:03.082155   65754 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 02:12:03.082266   65754 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 02:12:03.082403   65754 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 02:12:03.082466   65754 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 02:12:03.082637   65754 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 02:12:03.082772   65754 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 02:12:03.082860   65754 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002052388s
	I1026 02:12:03.082952   65754 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1026 02:12:03.083032   65754 kubeadm.go:310] [api-check] The API server is healthy after 4.502221494s
	I1026 02:12:03.083154   65754 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 02:12:03.083308   65754 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 02:12:03.083364   65754 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 02:12:03.083526   65754 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-661357 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 02:12:03.083586   65754 kubeadm.go:310] [bootstrap-token] Using token: uipnso.zomiujhwx3y2ufuh
	I1026 02:12:03.084899   65754 out.go:235]   - Configuring RBAC rules ...
	I1026 02:12:03.085009   65754 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 02:12:03.085093   65754 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 02:12:03.085216   65754 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 02:12:03.085329   65754 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 02:12:03.085469   65754 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 02:12:03.085577   65754 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 02:12:03.085728   65754 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 02:12:03.085785   65754 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1026 02:12:03.085824   65754 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1026 02:12:03.085831   65754 kubeadm.go:310] 
	I1026 02:12:03.085881   65754 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1026 02:12:03.085887   65754 kubeadm.go:310] 
	I1026 02:12:03.085991   65754 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1026 02:12:03.086003   65754 kubeadm.go:310] 
	I1026 02:12:03.086034   65754 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1026 02:12:03.086102   65754 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 02:12:03.086183   65754 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 02:12:03.086197   65754 kubeadm.go:310] 
	I1026 02:12:03.086266   65754 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1026 02:12:03.086275   65754 kubeadm.go:310] 
	I1026 02:12:03.086342   65754 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 02:12:03.086354   65754 kubeadm.go:310] 
	I1026 02:12:03.086435   65754 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1026 02:12:03.086553   65754 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 02:12:03.086651   65754 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 02:12:03.086676   65754 kubeadm.go:310] 
	I1026 02:12:03.086776   65754 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 02:12:03.086878   65754 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1026 02:12:03.086886   65754 kubeadm.go:310] 
	I1026 02:12:03.087009   65754 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token uipnso.zomiujhwx3y2ufuh \
	I1026 02:12:03.087131   65754 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d \
	I1026 02:12:03.087166   65754 kubeadm.go:310] 	--control-plane 
	I1026 02:12:03.087185   65754 kubeadm.go:310] 
	I1026 02:12:03.087304   65754 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1026 02:12:03.087313   65754 kubeadm.go:310] 
	I1026 02:12:03.087429   65754 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token uipnso.zomiujhwx3y2ufuh \
	I1026 02:12:03.087596   65754 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d 
	I1026 02:12:03.087633   65754 cni.go:84] Creating CNI manager for ""
	I1026 02:12:03.087643   65754 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:12:03.089871   65754 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 02:12:03.091018   65754 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 02:12:03.101681   65754 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
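For reference, the 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist in the step above is not reproduced in this log. A minimal bridge CNI config of the same general shape is sketched below; the subnet and option values are illustrative assumptions (the pod IPs seen later in the log are in 10.244.0.x), not the exact file minikube writes.

	# Illustrative sketch only -- not the exact conflist minikube copies.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF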
	I1026 02:12:03.123970   65754 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 02:12:03.124050   65754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:12:03.124077   65754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-661357 minikube.k8s.io/updated_at=2024_10_26T02_12_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=default-k8s-diff-port-661357 minikube.k8s.io/primary=true
	I1026 02:12:03.354377   65754 ops.go:34] apiserver oom_adj: -16
	I1026 02:12:03.354475   65754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:12:03.854698   65754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:12:04.354616   65754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:12:04.854638   65754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:12:05.354813   65754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:12:05.855148   65754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:12:06.355359   65754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:12:06.855535   65754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:12:07.355533   65754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:12:07.451598   65754 kubeadm.go:1113] duration metric: took 4.327614478s to wait for elevateKubeSystemPrivileges
	I1026 02:12:07.451639   65754 kubeadm.go:394] duration metric: took 14.375449458s to StartCluster
	I1026 02:12:07.451673   65754 settings.go:142] acquiring lock: {Name:mkb363a7a1b1532a7f832b54a0283d0a9e3d2b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:12:07.451748   65754 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:12:07.453670   65754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:12:07.453926   65754 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 02:12:07.453961   65754 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 02:12:07.454034   65754 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-661357"
	I1026 02:12:07.454056   65754 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-661357"
	I1026 02:12:07.454080   65754 host.go:66] Checking if "default-k8s-diff-port-661357" exists ...
	I1026 02:12:07.453939   65754 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 02:12:07.454111   65754 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-661357"
	I1026 02:12:07.454128   65754 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-661357"
	I1026 02:12:07.454140   65754 config.go:182] Loaded profile config "default-k8s-diff-port-661357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:12:07.454535   65754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:12:07.454565   65754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:12:07.454579   65754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:12:07.454605   65754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:12:07.455699   65754 out.go:177] * Verifying Kubernetes components...
	I1026 02:12:07.457118   65754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:12:07.469983   65754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43171
	I1026 02:12:07.469992   65754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I1026 02:12:07.470491   65754 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:12:07.470539   65754 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:12:07.471028   65754 main.go:141] libmachine: Using API Version  1
	I1026 02:12:07.471029   65754 main.go:141] libmachine: Using API Version  1
	I1026 02:12:07.471054   65754 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:12:07.471066   65754 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:12:07.471398   65754 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:12:07.471425   65754 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:12:07.471581   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:12:07.471982   65754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:12:07.472025   65754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:12:07.475799   65754 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-661357"
	I1026 02:12:07.475843   65754 host.go:66] Checking if "default-k8s-diff-port-661357" exists ...
	I1026 02:12:07.476241   65754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:12:07.476289   65754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:12:07.487926   65754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45337
	I1026 02:12:07.488367   65754 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:12:07.488868   65754 main.go:141] libmachine: Using API Version  1
	I1026 02:12:07.488895   65754 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:12:07.489277   65754 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:12:07.489486   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:12:07.491251   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:12:07.492300   65754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I1026 02:12:07.492750   65754 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:12:07.492901   65754 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:12:07.493186   65754 main.go:141] libmachine: Using API Version  1
	I1026 02:12:07.493208   65754 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:12:07.493596   65754 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:12:07.494168   65754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:12:07.494234   65754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:12:07.494668   65754 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:12:07.494684   65754 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 02:12:07.494702   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:12:07.497834   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:12:07.498342   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:12:07.498373   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:12:07.498572   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:12:07.498759   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:12:07.498931   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:12:07.499130   65754 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:12:07.511192   65754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44941
	I1026 02:12:07.511659   65754 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:12:07.512185   65754 main.go:141] libmachine: Using API Version  1
	I1026 02:12:07.512208   65754 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:12:07.512549   65754 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:12:07.512755   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:12:07.514370   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:12:07.516316   65754 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 02:12:07.516332   65754 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 02:12:07.516352   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:12:07.519405   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:12:07.519843   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:12:07.519869   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:12:07.519997   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:12:07.520188   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:12:07.520329   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:12:07.520507   65754 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:12:07.703903   65754 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:12:07.704028   65754 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 02:12:07.780033   65754 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-661357" to be "Ready" ...
	I1026 02:12:07.802865   65754 node_ready.go:49] node "default-k8s-diff-port-661357" has status "Ready":"True"
	I1026 02:12:07.802892   65754 node_ready.go:38] duration metric: took 22.830034ms for node "default-k8s-diff-port-661357" to be "Ready" ...
	I1026 02:12:07.802905   65754 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:12:07.811534   65754 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-f9sr6" in "kube-system" namespace to be "Ready" ...
	I1026 02:12:07.847292   65754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 02:12:07.878019   65754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:12:08.292973   65754 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
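The sed pipeline run at 02:12:07.704028 above rewrites the coredns ConfigMap in place. Reconstructed from that sed expression, the fragment it injects into the Corefile is the hosts block shown below (as comments, with illustrative indentation), plus a "log" directive inserted before "errors"; the live result can be inspected with kubectl.

	# Fragment injected into the CoreDNS Corefile by the replace above:
	#
	#     hosts {
	#        192.168.72.1 host.minikube.internal
	#        fallthrough
	#     }
	#
	# To inspect the live ConfigMap:
	kubectl -n kube-system get configmap coredns -o yaml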
	I1026 02:12:08.293028   65754 main.go:141] libmachine: Making call to close driver server
	I1026 02:12:08.293045   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:12:08.293470   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Closing plugin on server side
	I1026 02:12:08.294622   65754 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:12:08.294646   65754 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:12:08.294661   65754 main.go:141] libmachine: Making call to close driver server
	I1026 02:12:08.294669   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:12:08.295072   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Closing plugin on server side
	I1026 02:12:08.295113   65754 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:12:08.295136   65754 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:12:08.301512   65754 main.go:141] libmachine: Making call to close driver server
	I1026 02:12:08.301569   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:12:08.301835   65754 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:12:08.301851   65754 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:12:08.799216   65754 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-661357" context rescaled to 1 replicas
	I1026 02:12:08.877707   65754 main.go:141] libmachine: Making call to close driver server
	I1026 02:12:08.877736   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:12:08.878005   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Closing plugin on server side
	I1026 02:12:08.878020   65754 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:12:08.878061   65754 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:12:08.878072   65754 main.go:141] libmachine: Making call to close driver server
	I1026 02:12:08.878080   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:12:08.878361   65754 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Closing plugin on server side
	I1026 02:12:08.878365   65754 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:12:08.878389   65754 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:12:08.879883   65754 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1026 02:12:08.755338   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:12:08.755604   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:12:08.881159   65754 addons.go:510] duration metric: took 1.427197971s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1026 02:12:09.816965   65754 pod_ready.go:103] pod "coredns-7c65d6cfc9-f9sr6" in "kube-system" namespace has status "Ready":"False"
	I1026 02:12:11.819252   65754 pod_ready.go:103] pod "coredns-7c65d6cfc9-f9sr6" in "kube-system" namespace has status "Ready":"False"
	I1026 02:12:14.317344   65754 pod_ready.go:103] pod "coredns-7c65d6cfc9-f9sr6" in "kube-system" namespace has status "Ready":"False"
	I1026 02:12:16.317745   65754 pod_ready.go:103] pod "coredns-7c65d6cfc9-f9sr6" in "kube-system" namespace has status "Ready":"False"
	I1026 02:12:18.318823   65754 pod_ready.go:103] pod "coredns-7c65d6cfc9-f9sr6" in "kube-system" namespace has status "Ready":"False"
	I1026 02:12:19.317362   65754 pod_ready.go:98] pod "coredns-7c65d6cfc9-f9sr6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:12:19 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:12:07 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:12:07 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:12:07 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:12:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.18 HostIPs:[{IP:192.168.72.18}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-26 02:12:07 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-26 02:12:08 +0000 UTC,FinishedAt:2024-10-26 02:12:19 +0000 UTC,ContainerID:cri-o://6e7c3fc1399d8f14af40e860d2b1bdfc0415be705e7aa8aaf9e5a04e48f22db4,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://6e7c3fc1399d8f14af40e860d2b1bdfc0415be705e7aa8aaf9e5a04e48f22db4 Started:0xc001c6f690 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0007b0eb0} {Name:kube-api-access-5h9vg MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0007b0ec0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1026 02:12:19.317388   65754 pod_ready.go:82] duration metric: took 11.505824024s for pod "coredns-7c65d6cfc9-f9sr6" in "kube-system" namespace to be "Ready" ...
	E1026 02:12:19.317400   65754 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-f9sr6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:12:19 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:12:07 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:12:07 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:12:07 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:12:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.18 HostIPs:[{IP:192.168.72.18}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-26 02:12:07 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-26 02:12:08 +0000 UTC,FinishedAt:2024-10-26 02:12:19 +0000 UTC,ContainerID:cri-o://6e7c3fc1399d8f14af40e860d2b1bdfc0415be705e7aa8aaf9e5a04e48f22db4,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://6e7c3fc1399d8f14af40e860d2b1bdfc0415be705e7aa8aaf9e5a04e48f22db4 Started:0xc001c6f690 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0007b0eb0} {Name:kube-api-access-5h9vg MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0007b0ec0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1026 02:12:19.317428   65754 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace to be "Ready" ...
	I1026 02:12:21.323438   65754 pod_ready.go:103] pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace has status "Ready":"False"
	I1026 02:12:23.324998   65754 pod_ready.go:103] pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace has status "Ready":"False"
	I1026 02:12:25.823698   65754 pod_ready.go:103] pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace has status "Ready":"False"
	I1026 02:12:28.323038   65754 pod_ready.go:103] pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace has status "Ready":"False"
	I1026 02:12:28.755162   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:12:28.755376   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:12:30.323415   65754 pod_ready.go:103] pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace has status "Ready":"False"
	I1026 02:12:32.324284   65754 pod_ready.go:103] pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace has status "Ready":"False"
	I1026 02:12:34.823503   65754 pod_ready.go:103] pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace has status "Ready":"False"
	I1026 02:12:36.824829   65754 pod_ready.go:103] pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace has status "Ready":"False"
	I1026 02:12:39.322919   65754 pod_ready.go:103] pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace has status "Ready":"False"
	I1026 02:12:41.323323   65754 pod_ready.go:103] pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace has status "Ready":"False"
	I1026 02:12:43.323320   65754 pod_ready.go:93] pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace has status "Ready":"True"
	I1026 02:12:43.323351   65754 pod_ready.go:82] duration metric: took 24.005911306s for pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace to be "Ready" ...
	I1026 02:12:43.323365   65754 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:12:43.327626   65754 pod_ready.go:93] pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace has status "Ready":"True"
	I1026 02:12:43.327649   65754 pod_ready.go:82] duration metric: took 4.276042ms for pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:12:43.327659   65754 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:12:43.331985   65754 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace has status "Ready":"True"
	I1026 02:12:43.332007   65754 pod_ready.go:82] duration metric: took 4.340701ms for pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:12:43.332021   65754 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:12:43.335963   65754 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace has status "Ready":"True"
	I1026 02:12:43.335982   65754 pod_ready.go:82] duration metric: took 3.952856ms for pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:12:43.335993   65754 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c947q" in "kube-system" namespace to be "Ready" ...
	I1026 02:12:43.339798   65754 pod_ready.go:93] pod "kube-proxy-c947q" in "kube-system" namespace has status "Ready":"True"
	I1026 02:12:43.339815   65754 pod_ready.go:82] duration metric: took 3.815064ms for pod "kube-proxy-c947q" in "kube-system" namespace to be "Ready" ...
	I1026 02:12:43.339823   65754 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:12:43.721677   65754 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace has status "Ready":"True"
	I1026 02:12:43.721706   65754 pod_ready.go:82] duration metric: took 381.875583ms for pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:12:43.721718   65754 pod_ready.go:39] duration metric: took 35.918792423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:12:43.721735   65754 api_server.go:52] waiting for apiserver process to appear ...
	I1026 02:12:43.721792   65754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:12:43.736753   65754 api_server.go:72] duration metric: took 36.282638695s to wait for apiserver process to appear ...
	I1026 02:12:43.736779   65754 api_server.go:88] waiting for apiserver healthz status ...
	I1026 02:12:43.736797   65754 api_server.go:253] Checking apiserver healthz at https://192.168.72.18:8444/healthz ...
	I1026 02:12:43.741045   65754 api_server.go:279] https://192.168.72.18:8444/healthz returned 200:
	ok
	I1026 02:12:43.742197   65754 api_server.go:141] control plane version: v1.31.2
	I1026 02:12:43.742223   65754 api_server.go:131] duration metric: took 5.437299ms to wait for apiserver health ...
	I1026 02:12:43.742233   65754 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 02:12:43.924261   65754 system_pods.go:59] 7 kube-system pods found
	I1026 02:12:43.924293   65754 system_pods.go:61] "coredns-7c65d6cfc9-xpxp4" [d3ea4ee4-aab2-4c92-ab2f-e1026c703ea1] Running
	I1026 02:12:43.924298   65754 system_pods.go:61] "etcd-default-k8s-diff-port-661357" [e0edffc7-d9fa-45e0-9250-3ea465d61e01] Running
	I1026 02:12:43.924302   65754 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-661357" [87332b2c-b6bd-4008-8db7-76b60f782d8b] Running
	I1026 02:12:43.924306   65754 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-661357" [4eb18006-0e9c-466c-8be9-c16250a8851b] Running
	I1026 02:12:43.924309   65754 system_pods.go:61] "kube-proxy-c947q" [e41c6a1e-1a8e-4c49-93ff-e0c60a87ea69] Running
	I1026 02:12:43.924313   65754 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-661357" [af14b2f6-20bd-4f05-9a9d-ea1ca7e53887] Running
	I1026 02:12:43.924316   65754 system_pods.go:61] "storage-provisioner" [90c86915-4d74-4774-b8cd-86bf37672a55] Running
	I1026 02:12:43.924322   65754 system_pods.go:74] duration metric: took 182.083463ms to wait for pod list to return data ...
	I1026 02:12:43.924330   65754 default_sa.go:34] waiting for default service account to be created ...
	I1026 02:12:44.121649   65754 default_sa.go:45] found service account: "default"
	I1026 02:12:44.121698   65754 default_sa.go:55] duration metric: took 197.353243ms for default service account to be created ...
	I1026 02:12:44.121706   65754 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 02:12:44.324279   65754 system_pods.go:86] 7 kube-system pods found
	I1026 02:12:44.324306   65754 system_pods.go:89] "coredns-7c65d6cfc9-xpxp4" [d3ea4ee4-aab2-4c92-ab2f-e1026c703ea1] Running
	I1026 02:12:44.324311   65754 system_pods.go:89] "etcd-default-k8s-diff-port-661357" [e0edffc7-d9fa-45e0-9250-3ea465d61e01] Running
	I1026 02:12:44.324315   65754 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-661357" [87332b2c-b6bd-4008-8db7-76b60f782d8b] Running
	I1026 02:12:44.324319   65754 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-661357" [4eb18006-0e9c-466c-8be9-c16250a8851b] Running
	I1026 02:12:44.324331   65754 system_pods.go:89] "kube-proxy-c947q" [e41c6a1e-1a8e-4c49-93ff-e0c60a87ea69] Running
	I1026 02:12:44.324335   65754 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-661357" [af14b2f6-20bd-4f05-9a9d-ea1ca7e53887] Running
	I1026 02:12:44.324338   65754 system_pods.go:89] "storage-provisioner" [90c86915-4d74-4774-b8cd-86bf37672a55] Running
	I1026 02:12:44.324343   65754 system_pods.go:126] duration metric: took 202.632979ms to wait for k8s-apps to be running ...
	I1026 02:12:44.324350   65754 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 02:12:44.324390   65754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 02:12:44.338781   65754 system_svc.go:56] duration metric: took 14.422707ms WaitForService to wait for kubelet
	I1026 02:12:44.338813   65754 kubeadm.go:582] duration metric: took 36.884700619s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:12:44.338836   65754 node_conditions.go:102] verifying NodePressure condition ...
	I1026 02:12:44.521752   65754 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 02:12:44.521782   65754 node_conditions.go:123] node cpu capacity is 2
	I1026 02:12:44.521794   65754 node_conditions.go:105] duration metric: took 182.952419ms to run NodePressure ...
	I1026 02:12:44.521804   65754 start.go:241] waiting for startup goroutines ...
	I1026 02:12:44.521811   65754 start.go:246] waiting for cluster config update ...
	I1026 02:12:44.521820   65754 start.go:255] writing updated cluster config ...
	I1026 02:12:44.522077   65754 ssh_runner.go:195] Run: rm -f paused
	I1026 02:12:44.566477   65754 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1026 02:12:44.568477   65754 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-661357" cluster and "default" namespace by default
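At this point the default-k8s-diff-port-661357 cluster is up and verified. A rough manual equivalent of the checks the test just performed is sketched below; it is illustrative only, since the test runs its probes over SSH with the in-VM kubeconfig and binaries, and the -k flag stands in for the client certificates it actually uses.

	export KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	kubectl get nodes                           # node reported "Ready" at 02:12:07.802865
	kubectl -n kube-system get pods             # the 7 kube-system pods listed at 02:12:43.924261
	curl -k https://192.168.72.18:8444/healthz  # returned 200 "ok" at 02:12:43.741045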
	I1026 02:13:08.754281   62745 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1026 02:13:08.754546   62745 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1026 02:13:08.754571   62745 kubeadm.go:310] 
	I1026 02:13:08.754618   62745 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1026 02:13:08.754657   62745 kubeadm.go:310] 		timed out waiting for the condition
	I1026 02:13:08.754663   62745 kubeadm.go:310] 
	I1026 02:13:08.754698   62745 kubeadm.go:310] 	This error is likely caused by:
	I1026 02:13:08.754729   62745 kubeadm.go:310] 		- The kubelet is not running
	I1026 02:13:08.754845   62745 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1026 02:13:08.754858   62745 kubeadm.go:310] 
	I1026 02:13:08.755003   62745 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1026 02:13:08.755055   62745 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1026 02:13:08.755098   62745 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1026 02:13:08.755108   62745 kubeadm.go:310] 
	I1026 02:13:08.755234   62745 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1026 02:13:08.755325   62745 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1026 02:13:08.755336   62745 kubeadm.go:310] 
	I1026 02:13:08.755472   62745 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1026 02:13:08.755590   62745 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1026 02:13:08.755717   62745 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1026 02:13:08.755808   62745 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1026 02:13:08.755822   62745 kubeadm.go:310] 
	I1026 02:13:08.756320   62745 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 02:13:08.756433   62745 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1026 02:13:08.756533   62745 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
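For the failed kubeadm init above (process 62745, Kubernetes v1.20.0), the troubleshooting advice printed by kubeadm gathers into the runnable sequence below; CONTAINERID is a placeholder for whatever container the crictl listing reports.

	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID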
	I1026 02:13:08.756597   62745 kubeadm.go:394] duration metric: took 8m7.60525109s to StartCluster
	I1026 02:13:08.756658   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:13:08.756718   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:13:08.801578   62745 cri.go:89] found id: ""
	I1026 02:13:08.801601   62745 logs.go:282] 0 containers: []
	W1026 02:13:08.801611   62745 logs.go:284] No container was found matching "kube-apiserver"
	I1026 02:13:08.801619   62745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:13:08.801676   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:13:08.838217   62745 cri.go:89] found id: ""
	I1026 02:13:08.838243   62745 logs.go:282] 0 containers: []
	W1026 02:13:08.838251   62745 logs.go:284] No container was found matching "etcd"
	I1026 02:13:08.838256   62745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:13:08.838310   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:13:08.874828   62745 cri.go:89] found id: ""
	I1026 02:13:08.874850   62745 logs.go:282] 0 containers: []
	W1026 02:13:08.874858   62745 logs.go:284] No container was found matching "coredns"
	I1026 02:13:08.874864   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:13:08.874910   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:13:08.908817   62745 cri.go:89] found id: ""
	I1026 02:13:08.908849   62745 logs.go:282] 0 containers: []
	W1026 02:13:08.908861   62745 logs.go:284] No container was found matching "kube-scheduler"
	I1026 02:13:08.908868   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:13:08.908929   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:13:08.940204   62745 cri.go:89] found id: ""
	I1026 02:13:08.940232   62745 logs.go:282] 0 containers: []
	W1026 02:13:08.940243   62745 logs.go:284] No container was found matching "kube-proxy"
	I1026 02:13:08.940250   62745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:13:08.940311   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:13:08.972715   62745 cri.go:89] found id: ""
	I1026 02:13:08.972745   62745 logs.go:282] 0 containers: []
	W1026 02:13:08.972755   62745 logs.go:284] No container was found matching "kube-controller-manager"
	I1026 02:13:08.972761   62745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:13:08.972811   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:13:09.005171   62745 cri.go:89] found id: ""
	I1026 02:13:09.005200   62745 logs.go:282] 0 containers: []
	W1026 02:13:09.005211   62745 logs.go:284] No container was found matching "kindnet"
	I1026 02:13:09.005218   62745 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 02:13:09.005290   62745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 02:13:09.037041   62745 cri.go:89] found id: ""
	I1026 02:13:09.037064   62745 logs.go:282] 0 containers: []
	W1026 02:13:09.037072   62745 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1026 02:13:09.037081   62745 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:13:09.037090   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:13:09.145798   62745 logs.go:123] Gathering logs for container status ...
	I1026 02:13:09.145829   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:13:09.188261   62745 logs.go:123] Gathering logs for kubelet ...
	I1026 02:13:09.188294   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:13:09.258267   62745 logs.go:123] Gathering logs for dmesg ...
	I1026 02:13:09.258299   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:13:09.280494   62745 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:13:09.280525   62745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 02:13:09.352147   62745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1026 02:13:09.352184   62745 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1026 02:13:09.352232   62745 out.go:270] * 
	W1026 02:13:09.352290   62745 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1026 02:13:09.352306   62745 out.go:270] * 
	W1026 02:13:09.353312   62745 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 02:13:09.356458   62745 out.go:201] 
	W1026 02:13:09.357637   62745 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1026 02:13:09.357681   62745 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1026 02:13:09.357700   62745 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1026 02:13:09.359166   62745 out.go:201] 
	
	
	==> CRI-O <==
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.295785169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729908790295727797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=396d92db-48d1-4814-8f4a-ce7d2e73868c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.298686544Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fee57d40-edb3-4e89-93b5-5d37b4601d6a name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.298779619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fee57d40-edb3-4e89-93b5-5d37b4601d6a name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.298838845Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fee57d40-edb3-4e89-93b5-5d37b4601d6a name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.331645810Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32ddfb90-b041-4c5d-a62c-f77aaf108e45 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.331735322Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32ddfb90-b041-4c5d-a62c-f77aaf108e45 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.332970992Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f3b246e-0973-4b37-a6c8-1abbcc137d15 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.333377083Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729908790333355825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f3b246e-0973-4b37-a6c8-1abbcc137d15 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.333904071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bf95117-9922-4591-836b-30cb76c31918 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.333954032Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bf95117-9922-4591-836b-30cb76c31918 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.333986076Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8bf95117-9922-4591-836b-30cb76c31918 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.364354500Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2464c7f-877c-4018-b43b-38fc72612c9c name=/runtime.v1.RuntimeService/Version
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.364426646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2464c7f-877c-4018-b43b-38fc72612c9c name=/runtime.v1.RuntimeService/Version
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.365642528Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45860b27-46bb-4b78-8596-64a0fc36551f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.366001842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729908790365975408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45860b27-46bb-4b78-8596-64a0fc36551f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.366468068Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=589070d4-6e77-4f29-ad49-85040461e3b3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.366513822Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=589070d4-6e77-4f29-ad49-85040461e3b3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.366544676Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=589070d4-6e77-4f29-ad49-85040461e3b3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.397225437Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f9296c7-3e1f-41ef-a77d-5d873d3ab56d name=/runtime.v1.RuntimeService/Version
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.397300505Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f9296c7-3e1f-41ef-a77d-5d873d3ab56d name=/runtime.v1.RuntimeService/Version
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.398525657Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4806f091-3c19-40ca-bd03-319ec79fdbba name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.398870253Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729908790398843483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4806f091-3c19-40ca-bd03-319ec79fdbba name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.399357319Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a9a69f2-8df0-4561-9542-c83ba8716e01 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.399402587Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a9a69f2-8df0-4561-9542-c83ba8716e01 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:13:10 old-k8s-version-385716 crio[627]: time="2024-10-26 02:13:10.399463184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2a9a69f2-8df0-4561-9542-c83ba8716e01 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct26 02:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050858] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037180] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.872334] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.849137] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.534061] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.223439] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.056856] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067296] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.170318] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.142616] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.248491] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.314889] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.058322] kauditd_printk_skb: 130 callbacks suppressed
	[Oct26 02:05] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[ +12.983702] kauditd_printk_skb: 46 callbacks suppressed
	[Oct26 02:09] systemd-fstab-generator[5115]: Ignoring "noauto" option for root device
	[Oct26 02:11] systemd-fstab-generator[5409]: Ignoring "noauto" option for root device
	[  +0.069450] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 02:13:10 up 8 min,  0 users,  load average: 0.08, 0.13, 0.09
	Linux old-k8s-version-385716 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 26 02:13:08 old-k8s-version-385716 kubelet[5586]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0004b36c0)
	Oct 26 02:13:08 old-k8s-version-385716 kubelet[5586]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Oct 26 02:13:08 old-k8s-version-385716 kubelet[5586]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 26 02:13:08 old-k8s-version-385716 kubelet[5586]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Oct 26 02:13:08 old-k8s-version-385716 kubelet[5586]: goroutine 150 [syscall]:
	Oct 26 02:13:08 old-k8s-version-385716 kubelet[5586]: syscall.Syscall6(0xe8, 0xd, 0xc000c11b6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Oct 26 02:13:08 old-k8s-version-385716 kubelet[5586]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Oct 26 02:13:08 old-k8s-version-385716 kubelet[5586]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xd, 0xc000c11b6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Oct 26 02:13:08 old-k8s-version-385716 kubelet[5586]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Oct 26 02:13:08 old-k8s-version-385716 kubelet[5586]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc00045b800, 0x0, 0x0, 0x0)
	Oct 26 02:13:08 old-k8s-version-385716 kubelet[5586]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Oct 26 02:13:08 old-k8s-version-385716 kubelet[5586]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000113cc0)
	Oct 26 02:13:08 old-k8s-version-385716 kubelet[5586]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Oct 26 02:13:08 old-k8s-version-385716 kubelet[5586]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Oct 26 02:13:08 old-k8s-version-385716 kubelet[5586]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Oct 26 02:13:08 old-k8s-version-385716 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 26 02:13:08 old-k8s-version-385716 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 26 02:13:09 old-k8s-version-385716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Oct 26 02:13:09 old-k8s-version-385716 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 26 02:13:09 old-k8s-version-385716 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 26 02:13:09 old-k8s-version-385716 kubelet[5638]: I1026 02:13:09.260214    5638 server.go:416] Version: v1.20.0
	Oct 26 02:13:09 old-k8s-version-385716 kubelet[5638]: I1026 02:13:09.260529    5638 server.go:837] Client rotation is on, will bootstrap in background
	Oct 26 02:13:09 old-k8s-version-385716 kubelet[5638]: I1026 02:13:09.262357    5638 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 26 02:13:09 old-k8s-version-385716 kubelet[5638]: I1026 02:13:09.263304    5638 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Oct 26 02:13:09 old-k8s-version-385716 kubelet[5638]: W1026 02:13:09.263326    5638 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-385716 -n old-k8s-version-385716
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-385716 -n old-k8s-version-385716: exit status 2 (217.484915ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-385716" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (751.74s)
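The kubeadm output captured above already names the relevant manual checks for this failure mode; collected below as a minimal sketch (it assumes a shell on the node, e.g. minikube ssh -p old-k8s-version-385716, with the profile name and the CRI-O socket path taken from that output, and is not part of the recorded test run):

    # probe the kubelet health endpoint that kubeadm polls
    curl -sSL http://localhost:10248/healthz
    # check the kubelet unit and its recent journal entries
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    # see whether CRI-O ever started any control-plane containers
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause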

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (541.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-767480 -n embed-certs-767480
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-26 02:18:14.785269541 +0000 UTC m=+5709.553032724
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767480 -n embed-certs-767480
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-767480 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-767480 logs -n 25: (1.027645002s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-226333                                        | pause-226333                 | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	| start   | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-093148             | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC | 26 Oct 24 01:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-093148                                   | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-767480            | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC | 26 Oct 24 01:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-385716        | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-093148                  | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-093148                                   | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC | 26 Oct 24 02:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-767480                 | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC | 26 Oct 24 02:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-385716                              | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC | 26 Oct 24 02:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-385716             | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC | 26 Oct 24 02:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-385716                              | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:11 UTC |
	| delete  | -p                                                     | disable-driver-mounts-713871 | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:11 UTC |
	|         | disable-driver-mounts-713871                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:12 UTC |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-661357  | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:12 UTC | 26 Oct 24 02:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:12 UTC |                     |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-661357       | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:15 UTC |                     |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 02:15:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 02:15:27.297785   67066 out.go:345] Setting OutFile to fd 1 ...
	I1026 02:15:27.297934   67066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:15:27.297945   67066 out.go:358] Setting ErrFile to fd 2...
	I1026 02:15:27.297952   67066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:15:27.298168   67066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 02:15:27.298737   67066 out.go:352] Setting JSON to false
	I1026 02:15:27.299667   67066 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7067,"bootTime":1729901860,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 02:15:27.299764   67066 start.go:139] virtualization: kvm guest
	I1026 02:15:27.302194   67066 out.go:177] * [default-k8s-diff-port-661357] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 02:15:27.303883   67066 notify.go:220] Checking for updates...
	I1026 02:15:27.303910   67066 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 02:15:27.305362   67066 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 02:15:27.307037   67066 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:15:27.308350   67066 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:15:27.309738   67066 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 02:15:27.311000   67066 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 02:15:27.312448   67066 config.go:182] Loaded profile config "default-k8s-diff-port-661357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:15:27.312903   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:15:27.312969   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:15:27.328075   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
	I1026 02:15:27.328420   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:15:27.328973   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:15:27.328995   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:15:27.329389   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:15:27.329584   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:15:27.329870   67066 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 02:15:27.330179   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:15:27.330236   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:15:27.345446   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42925
	I1026 02:15:27.345922   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:15:27.346439   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:15:27.346465   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:15:27.346771   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:15:27.346915   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:15:27.385240   67066 out.go:177] * Using the kvm2 driver based on existing profile
	I1026 02:15:27.386493   67066 start.go:297] selected driver: kvm2
	I1026 02:15:27.386506   67066 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-661357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:15:27.386627   67066 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 02:15:27.387355   67066 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:15:27.387437   67066 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 02:15:27.402972   67066 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 02:15:27.403447   67066 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:15:27.403480   67066 cni.go:84] Creating CNI manager for ""
	I1026 02:15:27.403538   67066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:15:27.403573   67066 start.go:340] cluster config:
	{Name:default-k8s-diff-port-661357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:15:27.403717   67066 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:15:27.405745   67066 out.go:177] * Starting "default-k8s-diff-port-661357" primary control-plane node in "default-k8s-diff-port-661357" cluster
	I1026 02:15:27.407319   67066 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:15:27.407362   67066 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 02:15:27.407375   67066 cache.go:56] Caching tarball of preloaded images
	I1026 02:15:27.407472   67066 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 02:15:27.407487   67066 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 02:15:27.407612   67066 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/config.json ...
	I1026 02:15:27.407850   67066 start.go:360] acquireMachinesLock for default-k8s-diff-port-661357: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 02:15:27.407893   67066 start.go:364] duration metric: took 24.39µs to acquireMachinesLock for "default-k8s-diff-port-661357"
	I1026 02:15:27.407914   67066 start.go:96] Skipping create...Using existing machine configuration
	I1026 02:15:27.407922   67066 fix.go:54] fixHost starting: 
	I1026 02:15:27.408209   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:15:27.408249   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:15:27.422977   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42159
	I1026 02:15:27.423350   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:15:27.423824   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:15:27.423847   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:15:27.424171   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:15:27.424338   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:15:27.424502   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:15:27.426304   67066 fix.go:112] recreateIfNeeded on default-k8s-diff-port-661357: state=Running err=<nil>
	W1026 02:15:27.426337   67066 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 02:15:27.428299   67066 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-661357" VM ...
	I1026 02:15:27.429557   67066 machine.go:93] provisionDockerMachine start ...
	I1026 02:15:27.429586   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:15:27.429817   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:15:27.432629   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:15:27.433124   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:15:27.433157   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:15:27.433315   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:15:27.433540   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:15:27.433688   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:15:27.433817   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:15:27.433940   67066 main.go:141] libmachine: Using SSH client type: native
	I1026 02:15:27.434150   67066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:15:27.434165   67066 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 02:15:30.317691   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:33.389688   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:39.469675   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:42.541741   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:48.625728   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:51.693782   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:00.813656   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:03.885647   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:09.965637   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:13.037626   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:19.117681   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:22.189689   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:28.273657   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:31.341685   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:37.421654   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:40.493714   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:46.573667   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:49.645724   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:55.725675   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:58.797640   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:04.877698   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:07.949690   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:14.033654   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:17.101631   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:23.181650   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:26.253675   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:32.333666   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:35.405742   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:41.489689   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:44.557647   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:50.637659   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:53.709622   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:59.789723   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:02.861727   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:08.945680   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:12.013718   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	
	
	==> CRI-O <==
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.381356652Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cda413ff-1216-48b7-8443-33158b62687a name=/runtime.v1.RuntimeService/Version
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.382284705Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f1aa62c-8f53-425b-bc52-292e22c94375 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.382744080Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909095382720486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f1aa62c-8f53-425b-bc52-292e22c94375 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.383196387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61138e21-551a-447c-ac8e-5fd71c973f61 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.383255374Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61138e21-551a-447c-ac8e-5fd71c973f61 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.383480496Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37,PodSandboxId:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729908318876194884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2824,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf095996f65d61435391825f447491d8b99ce45ea83ad6147d969d7a2eb83801,PodSandboxId:ce0853defb95f51622fcb3e5ad2e2496afe980b2865900bc8308c8a3b008444b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729908299053882013,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc5c98c7-431f-4722-8c46-33dafff2a3c0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237,PodSandboxId:f052f7dbfacb5f2fe6ec584b5265dcdba252a33acedbea28c7c1eef174938c1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729908295936977511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cs6fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05855bd2-58d5-4d83-b5b4-6b7d28b13957,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b,PodSandboxId:6cfba292d641f5bd6c55979d2e5acfbc399c884393af507f6c6305752d2c8f11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729908288174967446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlwh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e83fffc8-a912-4919-b
5f6-ccc2745bf855,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72,PodSandboxId:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729908288043691767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2
824,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c,PodSandboxId:c1605fc5bc9bc60a3e8e5fc21a12ed9f1a234177aa0148b5f2e68c7d80bef917,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908283264937179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7023c0641eec2819c0f2ce8282631f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d,PodSandboxId:a3b9f9a26b0303a1d6ca603c649b023e6533c45da5cb3257426c3ee9ef75fe55,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729908283272931573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f8eb99a7221787feb6623d61642305,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546,PodSandboxId:e2dbf33e6761cd9cee698fc4425b48a8493b9ac8d35b7ac9ae04dc5017b2b528,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729908283247735386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69e9479e6f97c36ab4818cbe06a2f90,},Annotations:map[string]string{io.kubernetes.container.hash:
c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa,PodSandboxId:8b9381980bb0356c8356984acf55315e9688845caf1855b7392faf02282fc58f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729908283236046317,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc7d9ad67417ee4369ecec880a71cbf,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=61138e21-551a-447c-ac8e-5fd71c973f61 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.416713794Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eef9222c-cffb-4326-8355-aab27573035b name=/runtime.v1.RuntimeService/Version
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.416788880Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eef9222c-cffb-4326-8355-aab27573035b name=/runtime.v1.RuntimeService/Version
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.417673570Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1379057f-914d-4a22-980c-71045c9e9e19 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.418063793Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909095418042194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1379057f-914d-4a22-980c-71045c9e9e19 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.418935392Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=188beabf-a518-46d1-9f85-d903c8a66535 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.419134728Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=188beabf-a518-46d1-9f85-d903c8a66535 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.419392958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37,PodSandboxId:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729908318876194884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2824,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf095996f65d61435391825f447491d8b99ce45ea83ad6147d969d7a2eb83801,PodSandboxId:ce0853defb95f51622fcb3e5ad2e2496afe980b2865900bc8308c8a3b008444b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729908299053882013,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc5c98c7-431f-4722-8c46-33dafff2a3c0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237,PodSandboxId:f052f7dbfacb5f2fe6ec584b5265dcdba252a33acedbea28c7c1eef174938c1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729908295936977511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cs6fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05855bd2-58d5-4d83-b5b4-6b7d28b13957,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b,PodSandboxId:6cfba292d641f5bd6c55979d2e5acfbc399c884393af507f6c6305752d2c8f11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729908288174967446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlwh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e83fffc8-a912-4919-b
5f6-ccc2745bf855,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72,PodSandboxId:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729908288043691767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2
824,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c,PodSandboxId:c1605fc5bc9bc60a3e8e5fc21a12ed9f1a234177aa0148b5f2e68c7d80bef917,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908283264937179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7023c0641eec2819c0f2ce8282631f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d,PodSandboxId:a3b9f9a26b0303a1d6ca603c649b023e6533c45da5cb3257426c3ee9ef75fe55,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729908283272931573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f8eb99a7221787feb6623d61642305,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546,PodSandboxId:e2dbf33e6761cd9cee698fc4425b48a8493b9ac8d35b7ac9ae04dc5017b2b528,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729908283247735386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69e9479e6f97c36ab4818cbe06a2f90,},Annotations:map[string]string{io.kubernetes.container.hash:
c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa,PodSandboxId:8b9381980bb0356c8356984acf55315e9688845caf1855b7392faf02282fc58f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729908283236046317,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc7d9ad67417ee4369ecec880a71cbf,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=188beabf-a518-46d1-9f85-d903c8a66535 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.419945743Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=37c15118-39cb-402a-ae14-d63f9fdf54ed name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.420149458Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ce0853defb95f51622fcb3e5ad2e2496afe980b2865900bc8308c8a3b008444b,Metadata:&PodSandboxMetadata{Name:busybox,Uid:cc5c98c7-431f-4722-8c46-33dafff2a3c0,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908295611086816,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc5c98c7-431f-4722-8c46-33dafff2a3c0,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-26T02:04:47.622658826Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f052f7dbfacb5f2fe6ec584b5265dcdba252a33acedbea28c7c1eef174938c1a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-cs6fv,Uid:05855bd2-58d5-4d83-b5b4-6b7d28b13957,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908295601421
318,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-cs6fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05855bd2-58d5-4d83-b5b4-6b7d28b13957,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-26T02:04:47.622670161Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47f42509a60b4e5e8b40f39b41ce1dde5e58f3c68f5ca4b46d9a68c6fa66ceab,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-c9cwx,Uid:62a837f0-6fdb-418e-a5dd-e3196bb51346,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908291722223126,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-c9cwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62a837f0-6fdb-418e-a5dd-e3196bb51346,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-26T02:04:47.
622675354Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6cfba292d641f5bd6c55979d2e5acfbc399c884393af507f6c6305752d2c8f11,Metadata:&PodSandboxMetadata{Name:kube-proxy-nlwh5,Uid:e83fffc8-a912-4919-b5f6-ccc2745bf855,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908287947711645,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nlwh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e83fffc8-a912-4919-b5f6-ccc2745bf855,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-26T02:04:47.622668389Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e34a3b8d-f8fd-4d67-b4e0-cd4b532d2824,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908287928617330,Labels:map[string
]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2824,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.i
o/config.seen: 2024-10-26T02:04:47.622676414Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a3b9f9a26b0303a1d6ca603c649b023e6533c45da5cb3257426c3ee9ef75fe55,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-767480,Uid:01f8eb99a7221787feb6623d61642305,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908282122858643,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f8eb99a7221787feb6623d61642305,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.84:2379,kubernetes.io/config.hash: 01f8eb99a7221787feb6623d61642305,kubernetes.io/config.seen: 2024-10-26T02:04:41.635329019Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e2dbf33e6761cd9cee698fc4425b48a8493b9ac8d35b7ac9ae04dc5017b2b528,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-7674
80,Uid:a69e9479e6f97c36ab4818cbe06a2f90,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908282117680242,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69e9479e6f97c36ab4818cbe06a2f90,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.84:8443,kubernetes.io/config.hash: a69e9479e6f97c36ab4818cbe06a2f90,kubernetes.io/config.seen: 2024-10-26T02:04:41.613425378Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8b9381980bb0356c8356984acf55315e9688845caf1855b7392faf02282fc58f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-767480,Uid:edc7d9ad67417ee4369ecec880a71cbf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908282116476626,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.
container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc7d9ad67417ee4369ecec880a71cbf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: edc7d9ad67417ee4369ecec880a71cbf,kubernetes.io/config.seen: 2024-10-26T02:04:41.613426574Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c1605fc5bc9bc60a3e8e5fc21a12ed9f1a234177aa0148b5f2e68c7d80bef917,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-767480,Uid:3e7023c0641eec2819c0f2ce8282631f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908282113966167,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7023c0641eec2819c0f2ce8282631f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3e7023c0641eec2819c0f2ce8282
631f,kubernetes.io/config.seen: 2024-10-26T02:04:41.613421215Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=37c15118-39cb-402a-ae14-d63f9fdf54ed name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.420671307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a154ebb2-b535-461d-ab6b-cb71eb7f0b4c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.420931278Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a154ebb2-b535-461d-ab6b-cb71eb7f0b4c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.422013600Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37,PodSandboxId:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729908318876194884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2824,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf095996f65d61435391825f447491d8b99ce45ea83ad6147d969d7a2eb83801,PodSandboxId:ce0853defb95f51622fcb3e5ad2e2496afe980b2865900bc8308c8a3b008444b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729908299053882013,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc5c98c7-431f-4722-8c46-33dafff2a3c0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237,PodSandboxId:f052f7dbfacb5f2fe6ec584b5265dcdba252a33acedbea28c7c1eef174938c1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729908295936977511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cs6fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05855bd2-58d5-4d83-b5b4-6b7d28b13957,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b,PodSandboxId:6cfba292d641f5bd6c55979d2e5acfbc399c884393af507f6c6305752d2c8f11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729908288174967446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlwh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e83fffc8-a912-4919-b
5f6-ccc2745bf855,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72,PodSandboxId:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729908288043691767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2
824,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c,PodSandboxId:c1605fc5bc9bc60a3e8e5fc21a12ed9f1a234177aa0148b5f2e68c7d80bef917,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908283264937179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7023c0641eec2819c0f2ce8282631f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d,PodSandboxId:a3b9f9a26b0303a1d6ca603c649b023e6533c45da5cb3257426c3ee9ef75fe55,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729908283272931573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f8eb99a7221787feb6623d61642305,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546,PodSandboxId:e2dbf33e6761cd9cee698fc4425b48a8493b9ac8d35b7ac9ae04dc5017b2b528,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729908283247735386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69e9479e6f97c36ab4818cbe06a2f90,},Annotations:map[string]string{io.kubernetes.container.hash:
c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa,PodSandboxId:8b9381980bb0356c8356984acf55315e9688845caf1855b7392faf02282fc58f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729908283236046317,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc7d9ad67417ee4369ecec880a71cbf,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a154ebb2-b535-461d-ab6b-cb71eb7f0b4c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.455907207Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16922adc-f657-49dc-923b-8d70d6da69ae name=/runtime.v1.RuntimeService/Version
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.455990037Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16922adc-f657-49dc-923b-8d70d6da69ae name=/runtime.v1.RuntimeService/Version
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.457153693Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=804adb7e-d1e8-41a3-bff4-709f7546bc5a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.457619352Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909095457574279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=804adb7e-d1e8-41a3-bff4-709f7546bc5a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.458141193Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d24b076-4501-4011-9734-57eb59f48868 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.458206050Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d24b076-4501-4011-9734-57eb59f48868 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:18:15 embed-certs-767480 crio[704]: time="2024-10-26 02:18:15.458399658Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37,PodSandboxId:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729908318876194884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2824,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf095996f65d61435391825f447491d8b99ce45ea83ad6147d969d7a2eb83801,PodSandboxId:ce0853defb95f51622fcb3e5ad2e2496afe980b2865900bc8308c8a3b008444b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729908299053882013,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc5c98c7-431f-4722-8c46-33dafff2a3c0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237,PodSandboxId:f052f7dbfacb5f2fe6ec584b5265dcdba252a33acedbea28c7c1eef174938c1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729908295936977511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cs6fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05855bd2-58d5-4d83-b5b4-6b7d28b13957,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b,PodSandboxId:6cfba292d641f5bd6c55979d2e5acfbc399c884393af507f6c6305752d2c8f11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729908288174967446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlwh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e83fffc8-a912-4919-b
5f6-ccc2745bf855,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72,PodSandboxId:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729908288043691767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2
824,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c,PodSandboxId:c1605fc5bc9bc60a3e8e5fc21a12ed9f1a234177aa0148b5f2e68c7d80bef917,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908283264937179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7023c0641eec2819c0f2ce8282631f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d,PodSandboxId:a3b9f9a26b0303a1d6ca603c649b023e6533c45da5cb3257426c3ee9ef75fe55,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729908283272931573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f8eb99a7221787feb6623d61642305,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546,PodSandboxId:e2dbf33e6761cd9cee698fc4425b48a8493b9ac8d35b7ac9ae04dc5017b2b528,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729908283247735386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69e9479e6f97c36ab4818cbe06a2f90,},Annotations:map[string]string{io.kubernetes.container.hash:
c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa,PodSandboxId:8b9381980bb0356c8356984acf55315e9688845caf1855b7392faf02282fc58f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729908283236046317,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc7d9ad67417ee4369ecec880a71cbf,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d24b076-4501-4011-9734-57eb59f48868 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	971fd135577b8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   18d36ab9890b0       storage-provisioner
	cf095996f65d6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   ce0853defb95f       busybox
	ad855eaecc8f0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   f052f7dbfacb5       coredns-7c65d6cfc9-cs6fv
	8e7db87c8d446       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   6cfba292d641f       kube-proxy-nlwh5
	ab0a492003385       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   18d36ab9890b0       storage-provisioner
	3517cb2fe7b8b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   a3b9f9a26b030       etcd-embed-certs-767480
	4c4a9339a3c46       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   c1605fc5bc9bc       kube-scheduler-embed-certs-767480
	04347160a1b38       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   e2dbf33e6761c       kube-apiserver-embed-certs-767480
	63e4fa14d2052       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   8b9381980bb03       kube-controller-manager-embed-certs-767480
	
	
	==> coredns [ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33110 - 51145 "HINFO IN 7420970859103797635.7978935013430623811. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010466713s
	
	
	==> describe nodes <==
	Name:               embed-certs-767480
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-767480
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=embed-certs-767480
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_26T01_57_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:56:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-767480
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 02:18:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 02:15:28 +0000   Sat, 26 Oct 2024 01:56:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 02:15:28 +0000   Sat, 26 Oct 2024 01:56:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 02:15:28 +0000   Sat, 26 Oct 2024 01:56:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 02:15:28 +0000   Sat, 26 Oct 2024 02:04:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.84
	  Hostname:    embed-certs-767480
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 088c65d91fef4086a939fa18be13c3d9
	  System UUID:                088c65d9-1fef-4086-a939-fa18be13c3d9
	  Boot ID:                    a50253e4-a196-4804-81f1-b0e701b06ad4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7c65d6cfc9-cs6fv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-767480                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-767480             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-767480    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-nlwh5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-767480             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-c9cwx               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-767480 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-767480 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-767480 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-767480 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-767480 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-767480 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node embed-certs-767480 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-767480 event: Registered Node embed-certs-767480 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-767480 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-767480 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-767480 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-767480 event: Registered Node embed-certs-767480 in Controller
	
	
	==> dmesg <==
	[Oct26 02:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050747] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037046] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.766135] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.847613] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.530299] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.316030] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.057919] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061584] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.204845] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.131774] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.267443] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[  +3.942041] systemd-fstab-generator[785]: Ignoring "noauto" option for root device
	[  +2.074432] systemd-fstab-generator[907]: Ignoring "noauto" option for root device
	[  +0.061387] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.502596] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.445492] systemd-fstab-generator[1540]: Ignoring "noauto" option for root device
	[  +5.201315] kauditd_printk_skb: 82 callbacks suppressed
	[Oct26 02:05] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d] <==
	{"level":"info","ts":"2024-10-26T02:05:01.666323Z","caller":"traceutil/trace.go:171","msg":"trace[890333824] linearizableReadLoop","detail":"{readStateIndex:641; appliedIndex:640; }","duration":"386.839616ms","start":"2024-10-26T02:05:01.279472Z","end":"2024-10-26T02:05:01.666311Z","steps":["trace[890333824] 'read index received'  (duration: 386.72062ms)","trace[890333824] 'applied index is now lower than readState.Index'  (duration: 118.432µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T02:05:01.666454Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"386.968416ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:05:01.666482Z","caller":"traceutil/trace.go:171","msg":"trace[1263220768] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:605; }","duration":"387.010395ms","start":"2024-10-26T02:05:01.279464Z","end":"2024-10-26T02:05:01.666474Z","steps":["trace[1263220768] 'agreement among raft nodes before linearized reading'  (duration: 386.907225ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:05:01.668954Z","caller":"traceutil/trace.go:171","msg":"trace[1099344624] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"1.029538853s","start":"2024-10-26T02:05:00.639402Z","end":"2024-10-26T02:05:01.668941Z","steps":["trace[1099344624] 'process raft request'  (duration: 1.026828514s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:05:01.669458Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T02:05:00.639390Z","time spent":"1.029618146s","remote":"127.0.0.1:54822","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4326,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-767480\" mod_revision:604 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-767480\" value_size:4258 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-767480\" > >"}
	{"level":"info","ts":"2024-10-26T02:05:01.856923Z","caller":"traceutil/trace.go:171","msg":"trace[379585869] linearizableReadLoop","detail":"{readStateIndex:642; appliedIndex:641; }","duration":"171.910453ms","start":"2024-10-26T02:05:01.684996Z","end":"2024-10-26T02:05:01.856907Z","steps":["trace[379585869] 'read index received'  (duration: 76.405409ms)","trace[379585869] 'applied index is now lower than readState.Index'  (duration: 95.504194ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T02:05:01.857055Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.041589ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-767480\" ","response":"range_response_count:1 size:5486"}
	{"level":"info","ts":"2024-10-26T02:05:01.857090Z","caller":"traceutil/trace.go:171","msg":"trace[235232383] range","detail":"{range_begin:/registry/minions/embed-certs-767480; range_end:; response_count:1; response_revision:605; }","duration":"172.090763ms","start":"2024-10-26T02:05:01.684993Z","end":"2024-10-26T02:05:01.857083Z","steps":["trace[235232383] 'agreement among raft nodes before linearized reading'  (duration: 171.974415ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:11:53.925133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.65661ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7744663890760896049 > lease_revoke:<id:6b7a92c691990dd3>","response":"size:29"}
	{"level":"info","ts":"2024-10-26T02:11:53.925618Z","caller":"traceutil/trace.go:171","msg":"trace[476278036] linearizableReadLoop","detail":"{readStateIndex:1079; appliedIndex:1078; }","duration":"121.98081ms","start":"2024-10-26T02:11:53.803608Z","end":"2024-10-26T02:11:53.925589Z","steps":["trace[476278036] 'read index received'  (duration: 26.849µs)","trace[476278036] 'applied index is now lower than readState.Index'  (duration: 121.95255ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T02:11:53.925780Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.15062ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-26T02:11:53.925853Z","caller":"traceutil/trace.go:171","msg":"trace[802079684] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:0; response_revision:957; }","duration":"122.254772ms","start":"2024-10-26T02:11:53.803586Z","end":"2024-10-26T02:11:53.925840Z","steps":["trace[802079684] 'agreement among raft nodes before linearized reading'  (duration: 122.119408ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:11:54.341450Z","caller":"traceutil/trace.go:171","msg":"trace[1397351559] transaction","detail":"{read_only:false; response_revision:958; number_of_response:1; }","duration":"153.599842ms","start":"2024-10-26T02:11:54.187822Z","end":"2024-10-26T02:11:54.341422Z","steps":["trace[1397351559] 'process raft request'  (duration: 153.488336ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:11:54.865936Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.137695ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:11:54.866078Z","caller":"traceutil/trace.go:171","msg":"trace[74440401] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:958; }","duration":"340.288275ms","start":"2024-10-26T02:11:54.525779Z","end":"2024-10-26T02:11:54.866067Z","steps":["trace[74440401] 'range keys from in-memory index tree'  (duration: 340.125332ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:11:54.865987Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"335.824671ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:11:54.866360Z","caller":"traceutil/trace.go:171","msg":"trace[1656302138] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:958; }","duration":"336.165231ms","start":"2024-10-26T02:11:54.530139Z","end":"2024-10-26T02:11:54.866304Z","steps":["trace[1656302138] 'range keys from in-memory index tree'  (duration: 335.749215ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:11:54.866463Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T02:11:54.530103Z","time spent":"336.306091ms","remote":"127.0.0.1:54822","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-10-26T02:11:55.255971Z","caller":"traceutil/trace.go:171","msg":"trace[1492592474] transaction","detail":"{read_only:false; response_revision:959; number_of_response:1; }","duration":"168.758257ms","start":"2024-10-26T02:11:55.087174Z","end":"2024-10-26T02:11:55.255932Z","steps":["trace[1492592474] 'process raft request'  (duration: 168.26457ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:11:55.401677Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.460808ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:11:55.401760Z","caller":"traceutil/trace.go:171","msg":"trace[1436977199] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:959; }","duration":"122.570693ms","start":"2024-10-26T02:11:55.279175Z","end":"2024-10-26T02:11:55.401745Z","steps":["trace[1436977199] 'range keys from in-memory index tree'  (duration: 122.409156ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:11:56.559575Z","caller":"traceutil/trace.go:171","msg":"trace[1515103929] transaction","detail":"{read_only:false; response_revision:961; number_of_response:1; }","duration":"210.946763ms","start":"2024-10-26T02:11:56.348608Z","end":"2024-10-26T02:11:56.559554Z","steps":["trace[1515103929] 'process raft request'  (duration: 210.591781ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:14:45.485635Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":852}
	{"level":"info","ts":"2024-10-26T02:14:45.496696Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":852,"took":"10.594203ms","hash":1713281930,"current-db-size-bytes":2842624,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2842624,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-10-26T02:14:45.496746Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1713281930,"revision":852,"compact-revision":-1}
	
	
	==> kernel <==
	 02:18:15 up 13 min,  0 users,  load average: 0.42, 0.27, 0.15
	Linux embed-certs-767480 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546] <==
	E1026 02:14:47.659845       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1026 02:14:47.659909       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 02:14:47.661028       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:14:47.661111       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 02:15:47.662250       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:15:47.662366       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1026 02:15:47.662440       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:15:47.662459       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 02:15:47.663601       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:15:47.663658       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 02:17:47.664560       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:17:47.664922       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1026 02:17:47.664638       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:17:47.665064       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 02:17:47.666098       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:17:47.666135       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa] <==
	E1026 02:12:50.194900       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:12:50.747001       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:13:20.200952       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:13:20.754100       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:13:50.207274       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:13:50.761166       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:14:20.213840       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:14:20.768703       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:14:50.220960       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:14:50.775315       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:15:20.226454       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:15:20.782671       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1026 02:15:28.734823       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-767480"
	E1026 02:15:50.235543       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:15:50.790478       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1026 02:16:11.703019       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="211.398µs"
	E1026 02:16:20.240968       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:16:20.797298       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1026 02:16:22.702694       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="78.12µs"
	E1026 02:16:50.247705       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:16:50.805698       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:17:20.254677       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:17:20.813353       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:17:50.261764       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:17:50.821138       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1026 02:04:48.333001       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1026 02:04:48.341591       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.84"]
	E1026 02:04:48.341671       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 02:04:48.372094       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1026 02:04:48.372130       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 02:04:48.372157       1 server_linux.go:169] "Using iptables Proxier"
	I1026 02:04:48.374241       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 02:04:48.374590       1 server.go:483] "Version info" version="v1.31.2"
	I1026 02:04:48.374614       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 02:04:48.376026       1 config.go:199] "Starting service config controller"
	I1026 02:04:48.376058       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1026 02:04:48.376088       1 config.go:105] "Starting endpoint slice config controller"
	I1026 02:04:48.376104       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1026 02:04:48.376632       1 config.go:328] "Starting node config controller"
	I1026 02:04:48.376657       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1026 02:04:48.476572       1 shared_informer.go:320] Caches are synced for service config
	I1026 02:04:48.476694       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1026 02:04:48.476742       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c] <==
	I1026 02:04:44.648034       1 serving.go:386] Generated self-signed cert in-memory
	W1026 02:04:46.631936       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 02:04:46.631970       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 02:04:46.632000       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 02:04:46.632008       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 02:04:46.653911       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1026 02:04:46.653945       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 02:04:46.656238       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1026 02:04:46.657368       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 02:04:46.657436       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 02:04:46.657469       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 02:04:46.758151       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 26 02:17:04 embed-certs-767480 kubelet[914]: E1026 02:17:04.688175     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c9cwx" podUID="62a837f0-6fdb-418e-a5dd-e3196bb51346"
	Oct 26 02:17:11 embed-certs-767480 kubelet[914]: E1026 02:17:11.840449     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909031840010059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:17:11 embed-certs-767480 kubelet[914]: E1026 02:17:11.840539     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909031840010059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:17:15 embed-certs-767480 kubelet[914]: E1026 02:17:15.688817     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c9cwx" podUID="62a837f0-6fdb-418e-a5dd-e3196bb51346"
	Oct 26 02:17:21 embed-certs-767480 kubelet[914]: E1026 02:17:21.842160     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909041841592122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:17:21 embed-certs-767480 kubelet[914]: E1026 02:17:21.842470     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909041841592122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:17:30 embed-certs-767480 kubelet[914]: E1026 02:17:30.691138     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c9cwx" podUID="62a837f0-6fdb-418e-a5dd-e3196bb51346"
	Oct 26 02:17:31 embed-certs-767480 kubelet[914]: E1026 02:17:31.844444     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909051844100345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:17:31 embed-certs-767480 kubelet[914]: E1026 02:17:31.844936     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909051844100345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:17:41 embed-certs-767480 kubelet[914]: E1026 02:17:41.700689     914 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 26 02:17:41 embed-certs-767480 kubelet[914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 26 02:17:41 embed-certs-767480 kubelet[914]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 26 02:17:41 embed-certs-767480 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 02:17:41 embed-certs-767480 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 02:17:41 embed-certs-767480 kubelet[914]: E1026 02:17:41.846627     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909061846302274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:17:41 embed-certs-767480 kubelet[914]: E1026 02:17:41.846672     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909061846302274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:17:44 embed-certs-767480 kubelet[914]: E1026 02:17:44.687060     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c9cwx" podUID="62a837f0-6fdb-418e-a5dd-e3196bb51346"
	Oct 26 02:17:51 embed-certs-767480 kubelet[914]: E1026 02:17:51.849164     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909071848810695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:17:51 embed-certs-767480 kubelet[914]: E1026 02:17:51.849462     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909071848810695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:17:56 embed-certs-767480 kubelet[914]: E1026 02:17:56.687898     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c9cwx" podUID="62a837f0-6fdb-418e-a5dd-e3196bb51346"
	Oct 26 02:18:01 embed-certs-767480 kubelet[914]: E1026 02:18:01.851848     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909081851300306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:18:01 embed-certs-767480 kubelet[914]: E1026 02:18:01.851894     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909081851300306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:18:10 embed-certs-767480 kubelet[914]: E1026 02:18:10.687553     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c9cwx" podUID="62a837f0-6fdb-418e-a5dd-e3196bb51346"
	Oct 26 02:18:11 embed-certs-767480 kubelet[914]: E1026 02:18:11.854036     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909091853371576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:18:11 embed-certs-767480 kubelet[914]: E1026 02:18:11.854356     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909091853371576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37] <==
	I1026 02:05:18.973568       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 02:05:18.988710       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 02:05:18.988849       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 02:05:36.390392       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 02:05:36.390721       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-767480_4f4a0de3-cf93-4192-8714-e9960db385e4!
	I1026 02:05:36.393542       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"43cf9c3f-47ec-401d-97dc-2583e1748a16", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-767480_4f4a0de3-cf93-4192-8714-e9960db385e4 became leader
	I1026 02:05:36.492327       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-767480_4f4a0de3-cf93-4192-8714-e9960db385e4!
	
	
	==> storage-provisioner [ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72] <==
	I1026 02:04:48.177991       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 02:05:18.181887       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-767480 -n embed-certs-767480
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-767480 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-c9cwx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-767480 describe pod metrics-server-6867b74b74-c9cwx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-767480 describe pod metrics-server-6867b74b74-c9cwx: exit status 1 (58.603835ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-c9cwx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-767480 describe pod metrics-server-6867b74b74-c9cwx: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (541.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (541.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1026 02:10:16.032877   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-093148 -n no-preload-093148
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-26 02:19:00.301070416 +0000 UTC m=+5755.068833592
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-093148 -n no-preload-093148
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-093148 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-093148 logs -n 25: (1.105704102s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-226333                                        | pause-226333                 | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	| start   | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-093148             | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC | 26 Oct 24 01:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-093148                                   | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-767480            | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC | 26 Oct 24 01:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-385716        | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-093148                  | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-093148                                   | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC | 26 Oct 24 02:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-767480                 | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC | 26 Oct 24 02:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-385716                              | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC | 26 Oct 24 02:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-385716             | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC | 26 Oct 24 02:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-385716                              | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:11 UTC |
	| delete  | -p                                                     | disable-driver-mounts-713871 | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:11 UTC |
	|         | disable-driver-mounts-713871                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:12 UTC |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-661357  | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:12 UTC | 26 Oct 24 02:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:12 UTC |                     |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-661357       | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:15 UTC |                     |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 02:15:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 02:15:27.297785   67066 out.go:345] Setting OutFile to fd 1 ...
	I1026 02:15:27.297934   67066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:15:27.297945   67066 out.go:358] Setting ErrFile to fd 2...
	I1026 02:15:27.297952   67066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:15:27.298168   67066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 02:15:27.298737   67066 out.go:352] Setting JSON to false
	I1026 02:15:27.299667   67066 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7067,"bootTime":1729901860,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 02:15:27.299764   67066 start.go:139] virtualization: kvm guest
	I1026 02:15:27.302194   67066 out.go:177] * [default-k8s-diff-port-661357] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 02:15:27.303883   67066 notify.go:220] Checking for updates...
	I1026 02:15:27.303910   67066 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 02:15:27.305362   67066 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 02:15:27.307037   67066 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:15:27.308350   67066 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:15:27.309738   67066 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 02:15:27.311000   67066 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 02:15:27.312448   67066 config.go:182] Loaded profile config "default-k8s-diff-port-661357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:15:27.312903   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:15:27.312969   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:15:27.328075   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
	I1026 02:15:27.328420   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:15:27.328973   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:15:27.328995   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:15:27.329389   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:15:27.329584   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:15:27.329870   67066 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 02:15:27.330179   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:15:27.330236   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:15:27.345446   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42925
	I1026 02:15:27.345922   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:15:27.346439   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:15:27.346465   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:15:27.346771   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:15:27.346915   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:15:27.385240   67066 out.go:177] * Using the kvm2 driver based on existing profile
	I1026 02:15:27.386493   67066 start.go:297] selected driver: kvm2
	I1026 02:15:27.386506   67066 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-661357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:15:27.386627   67066 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 02:15:27.387355   67066 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:15:27.387437   67066 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 02:15:27.402972   67066 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 02:15:27.403447   67066 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:15:27.403480   67066 cni.go:84] Creating CNI manager for ""
	I1026 02:15:27.403538   67066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:15:27.403573   67066 start.go:340] cluster config:
	{Name:default-k8s-diff-port-661357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:15:27.403717   67066 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:15:27.405745   67066 out.go:177] * Starting "default-k8s-diff-port-661357" primary control-plane node in "default-k8s-diff-port-661357" cluster
	I1026 02:15:27.407319   67066 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:15:27.407362   67066 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 02:15:27.407375   67066 cache.go:56] Caching tarball of preloaded images
	I1026 02:15:27.407472   67066 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 02:15:27.407487   67066 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 02:15:27.407612   67066 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/config.json ...
	I1026 02:15:27.407850   67066 start.go:360] acquireMachinesLock for default-k8s-diff-port-661357: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 02:15:27.407893   67066 start.go:364] duration metric: took 24.39µs to acquireMachinesLock for "default-k8s-diff-port-661357"
	I1026 02:15:27.407914   67066 start.go:96] Skipping create...Using existing machine configuration
	I1026 02:15:27.407922   67066 fix.go:54] fixHost starting: 
	I1026 02:15:27.408209   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:15:27.408249   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:15:27.422977   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42159
	I1026 02:15:27.423350   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:15:27.423824   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:15:27.423847   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:15:27.424171   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:15:27.424338   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:15:27.424502   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:15:27.426304   67066 fix.go:112] recreateIfNeeded on default-k8s-diff-port-661357: state=Running err=<nil>
	W1026 02:15:27.426337   67066 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 02:15:27.428299   67066 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-661357" VM ...
	I1026 02:15:27.429557   67066 machine.go:93] provisionDockerMachine start ...
	I1026 02:15:27.429586   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:15:27.429817   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:15:27.432629   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:15:27.433124   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:15:27.433157   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:15:27.433315   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:15:27.433540   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:15:27.433688   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:15:27.433817   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:15:27.433940   67066 main.go:141] libmachine: Using SSH client type: native
	I1026 02:15:27.434150   67066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:15:27.434165   67066 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 02:15:30.317691   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:33.389688   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:39.469675   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:42.541741   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:48.625728   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:51.693782   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:00.813656   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:03.885647   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:09.965637   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:13.037626   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:19.117681   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:22.189689   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:28.273657   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:31.341685   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:37.421654   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:40.493714   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:46.573667   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:49.645724   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:55.725675   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:58.797640   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:04.877698   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:07.949690   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:14.033654   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:17.101631   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:23.181650   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:26.253675   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:32.333666   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:35.405742   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:41.489689   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:44.557647   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:50.637659   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:53.709622   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:59.789723   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:02.861727   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:08.945680   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:12.013718   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:18.093693   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:21.169616   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:27.245681   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:30.317690   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:36.397652   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:39.469689   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:45.549661   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:48.621666   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:54.705716   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	
	
	==> CRI-O <==
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.918719478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b2b5a86b-3b5c-441f-bb54-d9aef4fd20af name=/runtime.v1.RuntimeService/Version
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.921982003Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=372ddc4b-3dec-46e0-a2a8-67068cfe3550 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.922357124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909140922333388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=372ddc4b-3dec-46e0-a2a8-67068cfe3550 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.922856701Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=944b8dca-62de-46db-918c-af2d64e876b7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.922931197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=944b8dca-62de-46db-918c-af2d64e876b7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.923153599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193,PodSandboxId:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729908367391624921,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d23a8f36bb1fe2584be1d4740528515bd6c4a38c8e4cbfb7c9bb367e8ac1e2,PodSandboxId:27b69c8ae1d86c778c72e7c6bd0e0813d0d6bfdd6e2afe46550a574cdd737380,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729908347363073734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34789ee5-dad1-4115-b92d-39279ef3891c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0,PodSandboxId:fafa599cf7d015aa7b52ad2098de56c8ff177ae440a165661857ee496eb55f3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729908344219658485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bxg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d00ff8f-b1c5-4d37-bb5a-48874ca5fc31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45,PodSandboxId:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729908336576228072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff,PodSandboxId:8da5a57e4ecd0f232bfde887b487d26041d00e9f312b073d740c097d1f7287aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729908336547148198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z7nrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9041b89-8769-4652-8d39-0982091ffc
7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be,PodSandboxId:0decf26c87177916c6000ec3153146f7ec0d84429e35e3f76557dd0d700700da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908332892088330,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc33dc3fa197cefb0ec44ae046e226aa,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01,PodSandboxId:b1729a0b3728d5dbf05359004ffdec2a30272fe12697be229fb82ded5008b1f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729908332884732779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf022757e3de98e7b0dc46aec18ce11,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454,PodSandboxId:1ef21846a61143cc1bd02e902a029cc61367949d36b210b0fd6f2124a698dc24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729908332817000326,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0606e52df31155c2078e142a34e4ce34,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e,PodSandboxId:7bf40987a87ffe2e0eecafb6ecba68a8252a0033e1c70a7b1f64502f9de9fb6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729908332785750262,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11f585fa774eedc4c512138bd241fad,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=944b8dca-62de-46db-918c-af2d64e876b7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.958463058Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d6ff683-cd41-421c-abf6-b8881b4edbef name=/runtime.v1.RuntimeService/Version
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.958545180Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d6ff683-cd41-421c-abf6-b8881b4edbef name=/runtime.v1.RuntimeService/Version
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.960358607Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=65eeecf1-ce05-4637-a1d3-f8b21b2dc579 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.960776751Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909140960746688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65eeecf1-ce05-4637-a1d3-f8b21b2dc579 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.961355828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40a12f26-7e2d-45eb-b29b-2a1eab78c601 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.961530351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40a12f26-7e2d-45eb-b29b-2a1eab78c601 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.961760139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193,PodSandboxId:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729908367391624921,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d23a8f36bb1fe2584be1d4740528515bd6c4a38c8e4cbfb7c9bb367e8ac1e2,PodSandboxId:27b69c8ae1d86c778c72e7c6bd0e0813d0d6bfdd6e2afe46550a574cdd737380,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729908347363073734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34789ee5-dad1-4115-b92d-39279ef3891c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0,PodSandboxId:fafa599cf7d015aa7b52ad2098de56c8ff177ae440a165661857ee496eb55f3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729908344219658485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bxg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d00ff8f-b1c5-4d37-bb5a-48874ca5fc31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45,PodSandboxId:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729908336576228072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff,PodSandboxId:8da5a57e4ecd0f232bfde887b487d26041d00e9f312b073d740c097d1f7287aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729908336547148198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z7nrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9041b89-8769-4652-8d39-0982091ffc
7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be,PodSandboxId:0decf26c87177916c6000ec3153146f7ec0d84429e35e3f76557dd0d700700da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908332892088330,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc33dc3fa197cefb0ec44ae046e226aa,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01,PodSandboxId:b1729a0b3728d5dbf05359004ffdec2a30272fe12697be229fb82ded5008b1f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729908332884732779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf022757e3de98e7b0dc46aec18ce11,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454,PodSandboxId:1ef21846a61143cc1bd02e902a029cc61367949d36b210b0fd6f2124a698dc24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729908332817000326,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0606e52df31155c2078e142a34e4ce34,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e,PodSandboxId:7bf40987a87ffe2e0eecafb6ecba68a8252a0033e1c70a7b1f64502f9de9fb6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729908332785750262,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11f585fa774eedc4c512138bd241fad,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40a12f26-7e2d-45eb-b29b-2a1eab78c601 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.997558831Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81c9b7ec-3afa-4ded-a6e2-755b1fa6de92 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.997637983Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81c9b7ec-3afa-4ded-a6e2-755b1fa6de92 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.998955651Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f326353-5c18-425f-9f8a-58676a84abed name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:19:00 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.999279758Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909140999259629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f326353-5c18-425f-9f8a-58676a84abed name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:19:01 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.999789853Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e979f588-820a-4a3c-9b83-d97a8a04b0da name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:19:01 no-preload-093148 crio[707]: time="2024-10-26 02:19:00.999840384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e979f588-820a-4a3c-9b83-d97a8a04b0da name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:19:01 no-preload-093148 crio[707]: time="2024-10-26 02:19:01.000060922Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193,PodSandboxId:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729908367391624921,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d23a8f36bb1fe2584be1d4740528515bd6c4a38c8e4cbfb7c9bb367e8ac1e2,PodSandboxId:27b69c8ae1d86c778c72e7c6bd0e0813d0d6bfdd6e2afe46550a574cdd737380,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729908347363073734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34789ee5-dad1-4115-b92d-39279ef3891c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0,PodSandboxId:fafa599cf7d015aa7b52ad2098de56c8ff177ae440a165661857ee496eb55f3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729908344219658485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bxg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d00ff8f-b1c5-4d37-bb5a-48874ca5fc31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45,PodSandboxId:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729908336576228072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff,PodSandboxId:8da5a57e4ecd0f232bfde887b487d26041d00e9f312b073d740c097d1f7287aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729908336547148198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z7nrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9041b89-8769-4652-8d39-0982091ffc
7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be,PodSandboxId:0decf26c87177916c6000ec3153146f7ec0d84429e35e3f76557dd0d700700da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908332892088330,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc33dc3fa197cefb0ec44ae046e226aa,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01,PodSandboxId:b1729a0b3728d5dbf05359004ffdec2a30272fe12697be229fb82ded5008b1f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729908332884732779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf022757e3de98e7b0dc46aec18ce11,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454,PodSandboxId:1ef21846a61143cc1bd02e902a029cc61367949d36b210b0fd6f2124a698dc24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729908332817000326,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0606e52df31155c2078e142a34e4ce34,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e,PodSandboxId:7bf40987a87ffe2e0eecafb6ecba68a8252a0033e1c70a7b1f64502f9de9fb6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729908332785750262,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11f585fa774eedc4c512138bd241fad,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e979f588-820a-4a3c-9b83-d97a8a04b0da name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:19:01 no-preload-093148 crio[707]: time="2024-10-26 02:19:01.015257693Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=64c96d05-a779-4eb4-96f5-37d396c569e3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 26 02:19:01 no-preload-093148 crio[707]: time="2024-10-26 02:19:01.015963464Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:27b69c8ae1d86c778c72e7c6bd0e0813d0d6bfdd6e2afe46550a574cdd737380,Metadata:&PodSandboxMetadata{Name:busybox,Uid:34789ee5-dad1-4115-b92d-39279ef3891c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908344120185686,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34789ee5-dad1-4115-b92d-39279ef3891c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-26T02:05:36.136165370Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fafa599cf7d015aa7b52ad2098de56c8ff177ae440a165661857ee496eb55f3e,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-4bxg2,Uid:6d00ff8f-b1c5-4d37-bb5a-48874ca5fc31,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17299083440251708
02,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bxg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d00ff8f-b1c5-4d37-bb5a-48874ca5fc31,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-26T02:05:36.136167288Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:847a98f0f2386927ecfa624ee37d8a7da77bb5265e755dab745fa46974a6c032,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-kwrk2,Uid:25c9f457-5112-4b5b-8a28-6cb290f5ebdf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908342226243348,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-kwrk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c9f457-5112-4b5b-8a28-6cb290f5ebdf,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-26T02:05:36.1
36163062Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e7f5b94f-ba28-42f6-a8bf-1e7ab4248537,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908336451660420,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-26T02:05:36.136164350Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8da5a57e4ecd0f232bfde887b487d26041d00e9f312b073d740c097d1f7287aa,Metadata:&PodSandboxMetadata{Name:kube-proxy-z7nrz,Uid:f9041b89-8769-4652-8d39-0982091ffc7c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908336443517809,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-z7nrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9041b89-8769-4652-8d39-0982091ffc7c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-10-26T02:05:36.136160777Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b1729a0b3728d5dbf05359004ffdec2a30272fe12697be229fb82ded5008b1f7,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-093148,Uid:ccf022757e3de98e7b0dc46aec18ce11,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908332644610155,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf022757e3de98e7b0dc46aec18ce11,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.9:2379,kubernetes.io/config.hash: ccf022757e3de98e7b0dc46aec18ce11,kubernetes.io/config.seen: 2024-10-26T02:05:32.154325128Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7bf40987a87ffe2e0eecafb6ecba68a8252a0033e1c70a7b1f64502f9de9fb6f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-093148,Ui
d:b11f585fa774eedc4c512138bd241fad,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908332643071563,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11f585fa774eedc4c512138bd241fad,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.9:8443,kubernetes.io/config.hash: b11f585fa774eedc4c512138bd241fad,kubernetes.io/config.seen: 2024-10-26T02:05:32.135986289Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0decf26c87177916c6000ec3153146f7ec0d84429e35e3f76557dd0d700700da,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-093148,Uid:dc33dc3fa197cefb0ec44ae046e226aa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908332641458665,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kub
ernetes.pod.name: kube-scheduler-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc33dc3fa197cefb0ec44ae046e226aa,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dc33dc3fa197cefb0ec44ae046e226aa,kubernetes.io/config.seen: 2024-10-26T02:05:32.135991504Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1ef21846a61143cc1bd02e902a029cc61367949d36b210b0fd6f2124a698dc24,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-093148,Uid:0606e52df31155c2078e142a34e4ce34,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908332640187433,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0606e52df31155c2078e142a34e4ce34,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0606e52df31155c2078e142a34e4ce34,kubern
etes.io/config.seen: 2024-10-26T02:05:32.135990506Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=64c96d05-a779-4eb4-96f5-37d396c569e3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 26 02:19:01 no-preload-093148 crio[707]: time="2024-10-26 02:19:01.016577970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c433179d-d1c2-4e85-9eef-cba5f73cc77e name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:19:01 no-preload-093148 crio[707]: time="2024-10-26 02:19:01.016628902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c433179d-d1c2-4e85-9eef-cba5f73cc77e name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:19:01 no-preload-093148 crio[707]: time="2024-10-26 02:19:01.016837432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193,PodSandboxId:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729908367391624921,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d23a8f36bb1fe2584be1d4740528515bd6c4a38c8e4cbfb7c9bb367e8ac1e2,PodSandboxId:27b69c8ae1d86c778c72e7c6bd0e0813d0d6bfdd6e2afe46550a574cdd737380,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729908347363073734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34789ee5-dad1-4115-b92d-39279ef3891c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0,PodSandboxId:fafa599cf7d015aa7b52ad2098de56c8ff177ae440a165661857ee496eb55f3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729908344219658485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bxg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d00ff8f-b1c5-4d37-bb5a-48874ca5fc31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45,PodSandboxId:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729908336576228072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff,PodSandboxId:8da5a57e4ecd0f232bfde887b487d26041d00e9f312b073d740c097d1f7287aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729908336547148198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z7nrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9041b89-8769-4652-8d39-0982091ffc
7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be,PodSandboxId:0decf26c87177916c6000ec3153146f7ec0d84429e35e3f76557dd0d700700da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908332892088330,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc33dc3fa197cefb0ec44ae046e226aa,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01,PodSandboxId:b1729a0b3728d5dbf05359004ffdec2a30272fe12697be229fb82ded5008b1f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729908332884732779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf022757e3de98e7b0dc46aec18ce11,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454,PodSandboxId:1ef21846a61143cc1bd02e902a029cc61367949d36b210b0fd6f2124a698dc24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729908332817000326,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0606e52df31155c2078e142a34e4ce34,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e,PodSandboxId:7bf40987a87ffe2e0eecafb6ecba68a8252a0033e1c70a7b1f64502f9de9fb6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729908332785750262,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11f585fa774eedc4c512138bd241fad,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c433179d-d1c2-4e85-9eef-cba5f73cc77e name=/runtime.v1.RuntimeService/ListContainers
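	The repeated ListContainers/ListPodSandbox entries above are CRI-O answering CRI polls over unix:///var/run/crio/crio.sock (the cri-socket annotation reported later in the node description). Purely as an illustration of that RPC, and not part of the test harness, here is a minimal Go sketch assuming the k8s.io/cri-api and google.golang.org/grpc modules and that same socket path:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket the kubelet itself talks to.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same call the debug log records as /runtime.v1.RuntimeService/ListContainers
		// with an empty filter ("No filters were applied, returning full container list").
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).ListContainers(ctx,
			&runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-25s  attempt=%d  %s\n",
				c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}

	Run on the node itself (for example via minikube ssh), this would print roughly the same id/name/attempt/state rows as the container status table below.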
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ff836e5f3f5bd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   1c708f1cd9cb5       storage-provisioner
	f9d23a8f36bb1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   27b69c8ae1d86       busybox
	c7f75959e8826       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   fafa599cf7d01       coredns-7c65d6cfc9-4bxg2
	ae236de084984       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   1c708f1cd9cb5       storage-provisioner
	8c15e7d230254       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   8da5a57e4ecd0       kube-proxy-z7nrz
	ab6ce981ea7a7       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   0decf26c87177       kube-scheduler-no-preload-093148
	1bcc48b027240       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   b1729a0b3728d       etcd-no-preload-093148
	dad51df9ec4db       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   1ef21846a6114       kube-controller-manager-no-preload-093148
	e712dd7959873       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   7bf40987a87ff       kube-apiserver-no-preload-093148
	
	
	==> coredns [c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60061 - 55559 "HINFO IN 1778746441980941812.3268527977647942046. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013346236s
	
	
	==> describe nodes <==
	Name:               no-preload-093148
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-093148
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=no-preload-093148
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_26T01_56_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:56:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-093148
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 02:18:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 02:16:19 +0000   Sat, 26 Oct 2024 01:56:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 02:16:19 +0000   Sat, 26 Oct 2024 01:56:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 02:16:19 +0000   Sat, 26 Oct 2024 01:56:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 02:16:19 +0000   Sat, 26 Oct 2024 02:05:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.9
	  Hostname:    no-preload-093148
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 386f8ff219bc4aa1a29c9a5b22a14fb6
	  System UUID:                386f8ff2-19bc-4aa1-a29c-9a5b22a14fb6
	  Boot ID:                    935ea570-396a-4311-bfbd-b623b11605f4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-4bxg2                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-093148                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-093148             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-093148    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-z7nrz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-093148             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-kwrk2              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-093148 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-093148 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-093148 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node no-preload-093148 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-093148 event: Registered Node no-preload-093148 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-093148 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-093148 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-093148 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-093148 event: Registered Node no-preload-093148 in Controller
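	The Ready/MemoryPressure/DiskPressure/PIDPressure conditions shown in the node description above can also be read programmatically with client-go. A minimal sketch follows, assuming a kubeconfig for this cluster at the default ~/.kube/config location (an assumption outside the test harness):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig; the minikube profile for this test would
		// normally provide the matching context.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"no-preload-093148", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Print an abbreviated view of the condition table rendered above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}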
	
	
	==> dmesg <==
	[Oct26 02:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057401] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039826] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct26 02:05] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.031897] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.468018] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.019089] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.063764] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051693] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.214288] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.117206] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.265646] systemd-fstab-generator[696]: Ignoring "noauto" option for root device
	[ +15.853517] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.067913] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.737164] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +3.705239] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.325506] systemd-fstab-generator[2035]: Ignoring "noauto" option for root device
	[  +3.330017] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.137800] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01] <==
	{"level":"info","ts":"2024-10-26T02:05:34.841765Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9fd1bd3e853e6d0b","local-member-attributes":"{Name:no-preload-093148 ClientURLs:[https://192.168.50.9:2379]}","request-path":"/0/members/9fd1bd3e853e6d0b/attributes","cluster-id":"c73435d0ce2db908","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-26T02:05:34.844334Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-26T02:05:34.846038Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-26T02:05:34.846460Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-26T02:05:34.848073Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-26T02:05:34.847543Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.9:2379"}
	{"level":"info","ts":"2024-10-26T02:05:34.850233Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-26T02:05:34.851560Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-26T02:11:54.041025Z","caller":"traceutil/trace.go:171","msg":"trace[1591687086] transaction","detail":"{read_only:false; response_revision:915; number_of_response:1; }","duration":"117.968319ms","start":"2024-10-26T02:11:53.923010Z","end":"2024-10-26T02:11:54.040979Z","steps":["trace[1591687086] 'process raft request'  (duration: 117.856044ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:11:54.326790Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.150061ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:11:54.326892Z","caller":"traceutil/trace.go:171","msg":"trace[1101601089] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:915; }","duration":"158.32233ms","start":"2024-10-26T02:11:54.168552Z","end":"2024-10-26T02:11:54.326875Z","steps":["trace[1101601089] 'range keys from in-memory index tree'  (duration: 158.128888ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:11:54.326978Z","caller":"traceutil/trace.go:171","msg":"trace[1837626419] transaction","detail":"{read_only:false; response_revision:916; number_of_response:1; }","duration":"184.46878ms","start":"2024-10-26T02:11:54.142504Z","end":"2024-10-26T02:11:54.326972Z","steps":["trace[1837626419] 'process raft request'  (duration: 182.388738ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:11:54.718699Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.964586ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:11:54.718878Z","caller":"traceutil/trace.go:171","msg":"trace[1394143706] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:916; }","duration":"234.209863ms","start":"2024-10-26T02:11:54.484651Z","end":"2024-10-26T02:11:54.718861Z","steps":["trace[1394143706] 'range keys from in-memory index tree'  (duration: 233.906417ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:11:54.718735Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.982589ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2024-10-26T02:11:54.719223Z","caller":"traceutil/trace.go:171","msg":"trace[1728007480] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:916; }","duration":"273.469981ms","start":"2024-10-26T02:11:54.445739Z","end":"2024-10-26T02:11:54.719209Z","steps":["trace[1728007480] 'range keys from in-memory index tree'  (duration: 272.861074ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:11:54.849280Z","caller":"traceutil/trace.go:171","msg":"trace[1238558795] transaction","detail":"{read_only:false; response_revision:917; number_of_response:1; }","duration":"125.740581ms","start":"2024-10-26T02:11:54.723522Z","end":"2024-10-26T02:11:54.849262Z","steps":["trace[1238558795] 'process raft request'  (duration: 125.63539ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:11:55.385496Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.467501ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:11:55.385583Z","caller":"traceutil/trace.go:171","msg":"trace[896981015] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:917; }","duration":"216.567194ms","start":"2024-10-26T02:11:55.168998Z","end":"2024-10-26T02:11:55.385565Z","steps":["trace[896981015] 'range keys from in-memory index tree'  (duration: 216.453553ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:11:55.385693Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"368.602512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:11:55.385737Z","caller":"traceutil/trace.go:171","msg":"trace[69896837] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:917; }","duration":"368.654909ms","start":"2024-10-26T02:11:55.017074Z","end":"2024-10-26T02:11:55.385729Z","steps":["trace[69896837] 'range keys from in-memory index tree'  (duration: 368.564295ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:11:55.385804Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T02:11:55.017040Z","time spent":"368.718908ms","remote":"127.0.0.1:51036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-10-26T02:15:34.894212Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":849}
	{"level":"info","ts":"2024-10-26T02:15:34.904853Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":849,"took":"10.178828ms","hash":3803687416,"current-db-size-bytes":2650112,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2650112,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-10-26T02:15:34.904922Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3803687416,"revision":849,"compact-revision":-1}
	
	
	==> kernel <==
	 02:19:01 up 14 min,  0 users,  load average: 0.09, 0.11, 0.09
	Linux no-preload-093148 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e] <==
	E1026 02:15:37.096657       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1026 02:15:37.096746       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 02:15:37.097999       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:15:37.098126       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 02:16:37.098148       1 handler_proxy.go:99] no RequestInfo found in the context
	W1026 02:16:37.098219       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:16:37.098285       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1026 02:16:37.098338       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 02:16:37.100270       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:16:37.100296       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 02:18:37.101234       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:18:37.101315       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1026 02:18:37.101379       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:18:37.101484       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 02:18:37.102457       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:18:37.102632       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454] <==
	E1026 02:13:39.758020       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:13:40.219014       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:14:09.763373       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:14:10.225644       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:14:39.768879       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:14:40.232813       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:15:09.775377       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:15:10.240569       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:15:39.781838       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:15:40.248113       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:16:09.787745       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:16:10.257047       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1026 02:16:19.245544       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-093148"
	E1026 02:16:39.794678       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:16:40.265556       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1026 02:16:44.228975       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="228.883µs"
	I1026 02:16:55.225481       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="245.932µs"
	E1026 02:17:09.801003       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:17:10.272831       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:17:39.807245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:17:40.280237       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:18:09.812677       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:18:10.287451       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:18:39.819567       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:18:40.294886       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1026 02:05:36.797358       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1026 02:05:36.813727       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.9"]
	E1026 02:05:36.813817       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 02:05:36.849045       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1026 02:05:36.849095       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 02:05:36.849131       1 server_linux.go:169] "Using iptables Proxier"
	I1026 02:05:36.851686       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 02:05:36.852086       1 server.go:483] "Version info" version="v1.31.2"
	I1026 02:05:36.852135       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 02:05:36.854451       1 config.go:199] "Starting service config controller"
	I1026 02:05:36.854885       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1026 02:05:36.855136       1 config.go:105] "Starting endpoint slice config controller"
	I1026 02:05:36.855171       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1026 02:05:36.855990       1 config.go:328] "Starting node config controller"
	I1026 02:05:36.856021       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1026 02:05:36.955356       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1026 02:05:36.955377       1 shared_informer.go:320] Caches are synced for service config
	I1026 02:05:36.956154       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be] <==
	I1026 02:05:34.028748       1 serving.go:386] Generated self-signed cert in-memory
	W1026 02:05:36.009240       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 02:05:36.009277       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 02:05:36.009287       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 02:05:36.009294       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 02:05:36.070947       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1026 02:05:36.070987       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 02:05:36.077709       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1026 02:05:36.077817       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 02:05:36.077847       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 02:05:36.077861       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W1026 02:05:36.085660       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1026 02:05:36.085713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1026 02:05:36.085762       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 02:05:36.085789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1026 02:05:36.085838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1026 02:05:36.085862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1026 02:05:36.178561       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 26 02:17:44 no-preload-093148 kubelet[1425]: E1026 02:17:44.211583    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kwrk2" podUID="25c9f457-5112-4b5b-8a28-6cb290f5ebdf"
	Oct 26 02:17:52 no-preload-093148 kubelet[1425]: E1026 02:17:52.359566    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909072358949504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:17:52 no-preload-093148 kubelet[1425]: E1026 02:17:52.359587    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909072358949504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:17:59 no-preload-093148 kubelet[1425]: E1026 02:17:59.211482    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kwrk2" podUID="25c9f457-5112-4b5b-8a28-6cb290f5ebdf"
	Oct 26 02:18:02 no-preload-093148 kubelet[1425]: E1026 02:18:02.361212    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909082360965609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:18:02 no-preload-093148 kubelet[1425]: E1026 02:18:02.361244    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909082360965609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:18:12 no-preload-093148 kubelet[1425]: E1026 02:18:12.367636    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909092362475803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:18:12 no-preload-093148 kubelet[1425]: E1026 02:18:12.367969    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909092362475803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:18:13 no-preload-093148 kubelet[1425]: E1026 02:18:13.211119    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kwrk2" podUID="25c9f457-5112-4b5b-8a28-6cb290f5ebdf"
	Oct 26 02:18:22 no-preload-093148 kubelet[1425]: E1026 02:18:22.371287    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909102370456490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:18:22 no-preload-093148 kubelet[1425]: E1026 02:18:22.371329    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909102370456490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:18:26 no-preload-093148 kubelet[1425]: E1026 02:18:26.211745    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kwrk2" podUID="25c9f457-5112-4b5b-8a28-6cb290f5ebdf"
	Oct 26 02:18:32 no-preload-093148 kubelet[1425]: E1026 02:18:32.226466    1425 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 26 02:18:32 no-preload-093148 kubelet[1425]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 26 02:18:32 no-preload-093148 kubelet[1425]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 26 02:18:32 no-preload-093148 kubelet[1425]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 02:18:32 no-preload-093148 kubelet[1425]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 02:18:32 no-preload-093148 kubelet[1425]: E1026 02:18:32.372641    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909112372237805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:18:32 no-preload-093148 kubelet[1425]: E1026 02:18:32.372678    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909112372237805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:18:38 no-preload-093148 kubelet[1425]: E1026 02:18:38.210755    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kwrk2" podUID="25c9f457-5112-4b5b-8a28-6cb290f5ebdf"
	Oct 26 02:18:42 no-preload-093148 kubelet[1425]: E1026 02:18:42.374323    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909122373854884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:18:42 no-preload-093148 kubelet[1425]: E1026 02:18:42.374364    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909122373854884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:18:52 no-preload-093148 kubelet[1425]: E1026 02:18:52.376786    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909132376306744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:18:52 no-preload-093148 kubelet[1425]: E1026 02:18:52.377197    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909132376306744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:18:53 no-preload-093148 kubelet[1425]: E1026 02:18:53.211079    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kwrk2" podUID="25c9f457-5112-4b5b-8a28-6cb290f5ebdf"
	
	
	==> storage-provisioner [ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45] <==
	I1026 02:05:36.658676       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 02:06:06.663704       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193] <==
	I1026 02:06:07.466563       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 02:06:07.478270       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 02:06:07.478345       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 02:06:24.880873       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 02:06:24.881066       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-093148_f5631b76-cc32-4b61-840a-d84782b96ec7!
	I1026 02:06:24.882761       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"87a9f819-85f4-4c7c-9e1f-5c5d894f2048", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-093148_f5631b76-cc32-4b61-840a-d84782b96ec7 became leader
	I1026 02:06:24.981749       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-093148_f5631b76-cc32-4b61-840a-d84782b96ec7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-093148 -n no-preload-093148
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-093148 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-kwrk2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-093148 describe pod metrics-server-6867b74b74-kwrk2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-093148 describe pod metrics-server-6867b74b74-kwrk2: exit status 1 (60.928407ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-kwrk2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-093148 describe pod metrics-server-6867b74b74-kwrk2: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (541.98s)
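The captured logs above point at the metrics-server addon as the unhealthy component in this profile: the apiserver keeps returning 503 for the v1beta1.metrics.k8s.io APIService, and the kubelet reports ImagePullBackOff for the unpullable fake.domain/registry.k8s.io/echoserver:1.4 image. When triaging a run like this locally, a minimal sketch of the checks (assuming the same no-preload-093148 context, the usual metrics-server Deployment name in kube-system, and the k8s-app=metrics-server pod label) would be:

	# Is the aggregated metrics APIService registered and reporting Available?
	kubectl --context no-preload-093148 get apiservice v1beta1.metrics.k8s.io -o wide
	# Which image is the deployment actually trying to pull?
	kubectl --context no-preload-093148 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# Pod events should show the pull back-off seen in the kubelet log
	kubectl --context no-preload-093148 -n kube-system describe pods -l k8s-app=metrics-server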

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-661357 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-661357 --alsologtostderr -v=3: exit status 82 (2m0.48808227s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-661357"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 02:12:55.850291   66350 out.go:345] Setting OutFile to fd 1 ...
	I1026 02:12:55.850415   66350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:12:55.850427   66350 out.go:358] Setting ErrFile to fd 2...
	I1026 02:12:55.850434   66350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:12:55.850601   66350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 02:12:55.850857   66350 out.go:352] Setting JSON to false
	I1026 02:12:55.850954   66350 mustload.go:65] Loading cluster: default-k8s-diff-port-661357
	I1026 02:12:55.851295   66350 config.go:182] Loaded profile config "default-k8s-diff-port-661357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:12:55.851402   66350 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/config.json ...
	I1026 02:12:55.851584   66350 mustload.go:65] Loading cluster: default-k8s-diff-port-661357
	I1026 02:12:55.851717   66350 config.go:182] Loaded profile config "default-k8s-diff-port-661357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:12:55.851750   66350 stop.go:39] StopHost: default-k8s-diff-port-661357
	I1026 02:12:55.852188   66350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:12:55.852244   66350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:12:55.868702   66350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I1026 02:12:55.869193   66350 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:12:55.869773   66350 main.go:141] libmachine: Using API Version  1
	I1026 02:12:55.869793   66350 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:12:55.870158   66350 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:12:55.873071   66350 out.go:177] * Stopping node "default-k8s-diff-port-661357"  ...
	I1026 02:12:55.874224   66350 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1026 02:12:55.874265   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:12:55.874498   66350 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1026 02:12:55.874528   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:12:55.877550   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:12:55.877944   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:12:55.877974   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:12:55.878118   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:12:55.878294   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:12:55.878438   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:12:55.878601   66350 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:12:55.988610   66350 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1026 02:12:56.027596   66350 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1026 02:12:56.084733   66350 main.go:141] libmachine: Stopping "default-k8s-diff-port-661357"...
	I1026 02:12:56.084767   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:12:56.086503   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Stop
	I1026 02:12:56.090326   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 0/120
	I1026 02:12:57.091546   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 1/120
	I1026 02:12:58.092979   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 2/120
	I1026 02:12:59.094210   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 3/120
	I1026 02:13:00.095972   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 4/120
	I1026 02:13:01.098352   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 5/120
	I1026 02:13:02.100030   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 6/120
	I1026 02:13:03.101556   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 7/120
	I1026 02:13:04.104044   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 8/120
	I1026 02:13:05.105226   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 9/120
	I1026 02:13:06.107520   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 10/120
	I1026 02:13:07.109282   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 11/120
	I1026 02:13:08.110639   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 12/120
	I1026 02:13:09.112336   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 13/120
	I1026 02:13:10.113504   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 14/120
	I1026 02:13:11.114858   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 15/120
	I1026 02:13:12.116413   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 16/120
	I1026 02:13:13.117779   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 17/120
	I1026 02:13:14.119197   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 18/120
	I1026 02:13:15.121100   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 19/120
	I1026 02:13:16.123231   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 20/120
	I1026 02:13:17.124712   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 21/120
	I1026 02:13:18.126125   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 22/120
	I1026 02:13:19.127896   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 23/120
	I1026 02:13:20.129149   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 24/120
	I1026 02:13:21.130854   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 25/120
	I1026 02:13:22.133331   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 26/120
	I1026 02:13:23.134617   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 27/120
	I1026 02:13:24.136014   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 28/120
	I1026 02:13:25.137152   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 29/120
	I1026 02:13:26.139013   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 30/120
	I1026 02:13:27.140303   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 31/120
	I1026 02:13:28.141698   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 32/120
	I1026 02:13:29.143077   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 33/120
	I1026 02:13:30.144314   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 34/120
	I1026 02:13:31.146316   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 35/120
	I1026 02:13:32.147548   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 36/120
	I1026 02:13:33.148876   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 37/120
	I1026 02:13:34.150156   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 38/120
	I1026 02:13:35.151839   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 39/120
	I1026 02:13:36.153987   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 40/120
	I1026 02:13:37.156090   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 41/120
	I1026 02:13:38.157267   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 42/120
	I1026 02:13:39.158557   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 43/120
	I1026 02:13:40.160047   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 44/120
	I1026 02:13:41.161699   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 45/120
	I1026 02:13:42.162935   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 46/120
	I1026 02:13:43.164342   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 47/120
	I1026 02:13:44.165737   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 48/120
	I1026 02:13:45.167850   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 49/120
	I1026 02:13:46.169656   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 50/120
	I1026 02:13:47.171000   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 51/120
	I1026 02:13:48.172325   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 52/120
	I1026 02:13:49.174358   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 53/120
	I1026 02:13:50.175838   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 54/120
	I1026 02:13:51.178080   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 55/120
	I1026 02:13:52.179887   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 56/120
	I1026 02:13:53.181579   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 57/120
	I1026 02:13:54.182844   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 58/120
	I1026 02:13:55.184268   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 59/120
	I1026 02:13:56.186322   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 60/120
	I1026 02:13:57.187680   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 61/120
	I1026 02:13:58.188981   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 62/120
	I1026 02:13:59.190221   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 63/120
	I1026 02:14:00.191809   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 64/120
	I1026 02:14:01.193742   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 65/120
	I1026 02:14:02.195804   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 66/120
	I1026 02:14:03.197219   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 67/120
	I1026 02:14:04.198578   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 68/120
	I1026 02:14:05.200167   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 69/120
	I1026 02:14:06.202233   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 70/120
	I1026 02:14:07.203881   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 71/120
	I1026 02:14:08.205150   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 72/120
	I1026 02:14:09.206548   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 73/120
	I1026 02:14:10.207808   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 74/120
	I1026 02:14:11.210068   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 75/120
	I1026 02:14:12.211950   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 76/120
	I1026 02:14:13.213332   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 77/120
	I1026 02:14:14.214744   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 78/120
	I1026 02:14:15.216176   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 79/120
	I1026 02:14:16.218542   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 80/120
	I1026 02:14:17.219836   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 81/120
	I1026 02:14:18.221262   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 82/120
	I1026 02:14:19.222685   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 83/120
	I1026 02:14:20.223971   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 84/120
	I1026 02:14:21.225775   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 85/120
	I1026 02:14:22.227893   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 86/120
	I1026 02:14:23.229354   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 87/120
	I1026 02:14:24.230673   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 88/120
	I1026 02:14:25.231939   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 89/120
	I1026 02:14:26.234175   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 90/120
	I1026 02:14:27.235554   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 91/120
	I1026 02:14:28.236993   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 92/120
	I1026 02:14:29.238292   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 93/120
	I1026 02:14:30.239520   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 94/120
	I1026 02:14:31.241437   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 95/120
	I1026 02:14:32.242723   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 96/120
	I1026 02:14:33.244199   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 97/120
	I1026 02:14:34.245607   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 98/120
	I1026 02:14:35.247829   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 99/120
	I1026 02:14:36.250141   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 100/120
	I1026 02:14:37.251414   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 101/120
	I1026 02:14:38.252653   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 102/120
	I1026 02:14:39.254145   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 103/120
	I1026 02:14:40.256189   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 104/120
	I1026 02:14:41.257858   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 105/120
	I1026 02:14:42.259895   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 106/120
	I1026 02:14:43.261041   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 107/120
	I1026 02:14:44.262488   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 108/120
	I1026 02:14:45.264804   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 109/120
	I1026 02:14:46.267388   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 110/120
	I1026 02:14:47.268901   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 111/120
	I1026 02:14:48.270184   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 112/120
	I1026 02:14:49.271567   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 113/120
	I1026 02:14:50.273318   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 114/120
	I1026 02:14:51.275152   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 115/120
	I1026 02:14:52.276534   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 116/120
	I1026 02:14:53.277987   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 117/120
	I1026 02:14:54.279323   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 118/120
	I1026 02:14:55.280879   66350 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for machine to stop 119/120
	I1026 02:14:56.281855   66350 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1026 02:14:56.281927   66350 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1026 02:14:56.284118   66350 out.go:201] 
	W1026 02:14:56.285387   66350 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1026 02:14:56.285404   66350 out.go:270] * 
	* 
	W1026 02:14:56.288163   66350 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 02:14:56.289397   66350 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-661357 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-661357 -n default-k8s-diff-port-661357
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-661357 -n default-k8s-diff-port-661357: exit status 3 (18.570770482s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1026 02:15:14.861877   66857 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.18:22: connect: no route to host
	E1026 02:15:14.861897   66857 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.18:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-661357" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.06s)
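Here the stop command exhausted all 120 one-second "Waiting for machine to stop" polls and exited with GUEST_STOP_TIMEOUT, and the follow-up status check could not even open an SSH session (no route to host), so the guest ended up unreachable rather than cleanly stopped. For local reproduction on the KVM driver, a hedged sketch of inspecting and force-stopping the libvirt domain (assuming the domain is named after the profile, default-k8s-diff-port-661357, and that virsh is pointed at the same libvirt URI minikube uses) would be:

	# What state does libvirt think the guest is in?
	virsh list --all
	virsh domstate default-k8s-diff-port-661357
	# Force power-off if a graceful shutdown keeps hanging
	virsh destroy default-k8s-diff-port-661357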

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
E1026 02:13:52.961550   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
E1026 02:16:37.285125   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:18:52.961446   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
(last message repeated 46 times)
E1026 02:19:40.359820   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
(last message repeated 116 times)
E1026 02:21:37.284551   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
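The last warning above comes from the client-side rate limiter rather than from the API server: once the test's context deadline is nearer than the next available request token, Wait fails immediately instead of blocking. A minimal sketch of that behaviour with golang.org/x/time/rate, the token-bucket limiter that client-go's default rate limiter is built on; the rate and timeout values below are illustrative only, not the harness's settings:

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// One token every 5 seconds, burst of 1: after the first token is spent,
	// the next Wait would have to block roughly 5 seconds.
	limiter := rate.NewLimiter(rate.Every(5*time.Second), 1)

	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
	defer cancel()

	_ = limiter.Wait(ctx) // consumes the initial burst token immediately

	// The deadline (1s away) arrives before the next token (~5s away),
	// so Wait fails up front with:
	// "rate: Wait(n=1) would exceed context deadline"
	if err := limiter.Wait(ctx); err != nil {
		fmt.Println(err)
	}
}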
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-385716 -n old-k8s-version-385716
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-385716 -n old-k8s-version-385716: exit status 2 (225.173837ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-385716" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
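For context, the condition the harness kept retrying is an ordinary label-selector pod list against the profile's API server. The following is a hypothetical client-go sketch of the same check, not the harness code; the kubeconfig path and the 5-second poll interval are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the harness resolves this from the minikube profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same overall budget as the failing wait above: 9 minutes.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err == nil && anyRunning(pods.Items) {
			fmt.Println("dashboard pod is running")
			return
		}
		// While the apiserver is down, every attempt fails with "connection refused",
		// exactly like the warnings above, until the context deadline expires.
		select {
		case <-ctx.Done():
			fmt.Println("gave up:", ctx.Err())
			return
		case <-time.After(5 * time.Second):
		}
	}
}

func anyRunning(pods []corev1.Pod) bool {
	for _, p := range pods {
		if p.Status.Phase == corev1.PodRunning {
			return true
		}
	}
	return false
}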
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-385716 -n old-k8s-version-385716
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-385716 -n old-k8s-version-385716: exit status 2 (215.174217ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
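The non-zero exit from "minikube status" here reflects the stopped component rather than a harness failure, which is why the log notes "(may be ok)" and the harness relies on the printed state string instead of the exit code. A small, hypothetical Go sketch of reading both with os/exec, using the same command shown above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "old-k8s-version-385716")
	out, err := cmd.Output()

	state := strings.TrimSpace(string(out)) // e.g. "Stopped" or "Running"

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero exit (status 2 above) still comes with usable stdout,
		// so the printed state is what the caller should act on.
		fmt.Printf("state=%q exit=%d\n", state, exitErr.ExitCode())
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("state=%q exit=0\n", state)
}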
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-385716 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-226333                                        | pause-226333                 | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	| start   | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-093148             | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC | 26 Oct 24 01:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-093148                                   | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-767480            | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC | 26 Oct 24 01:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-385716        | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-093148                  | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-093148                                   | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC | 26 Oct 24 02:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-767480                 | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC | 26 Oct 24 02:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-385716                              | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC | 26 Oct 24 02:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-385716             | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC | 26 Oct 24 02:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-385716                              | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:11 UTC |
	| delete  | -p                                                     | disable-driver-mounts-713871 | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:11 UTC |
	|         | disable-driver-mounts-713871                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:12 UTC |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-661357  | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:12 UTC | 26 Oct 24 02:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:12 UTC |                     |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-661357       | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:15 UTC |                     |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 02:15:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 02:15:27.297785   67066 out.go:345] Setting OutFile to fd 1 ...
	I1026 02:15:27.297934   67066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:15:27.297945   67066 out.go:358] Setting ErrFile to fd 2...
	I1026 02:15:27.297952   67066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:15:27.298168   67066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 02:15:27.298737   67066 out.go:352] Setting JSON to false
	I1026 02:15:27.299667   67066 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7067,"bootTime":1729901860,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 02:15:27.299764   67066 start.go:139] virtualization: kvm guest
	I1026 02:15:27.302194   67066 out.go:177] * [default-k8s-diff-port-661357] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 02:15:27.303883   67066 notify.go:220] Checking for updates...
	I1026 02:15:27.303910   67066 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 02:15:27.305362   67066 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 02:15:27.307037   67066 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:15:27.308350   67066 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:15:27.309738   67066 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 02:15:27.311000   67066 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 02:15:27.312448   67066 config.go:182] Loaded profile config "default-k8s-diff-port-661357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:15:27.312903   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:15:27.312969   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:15:27.328075   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
	I1026 02:15:27.328420   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:15:27.328973   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:15:27.328995   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:15:27.329389   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:15:27.329584   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:15:27.329870   67066 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 02:15:27.330179   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:15:27.330236   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:15:27.345446   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42925
	I1026 02:15:27.345922   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:15:27.346439   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:15:27.346465   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:15:27.346771   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:15:27.346915   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:15:27.385240   67066 out.go:177] * Using the kvm2 driver based on existing profile
	I1026 02:15:27.386493   67066 start.go:297] selected driver: kvm2
	I1026 02:15:27.386506   67066 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-661357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:15:27.386627   67066 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 02:15:27.387355   67066 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:15:27.387437   67066 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 02:15:27.402972   67066 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 02:15:27.403447   67066 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:15:27.403480   67066 cni.go:84] Creating CNI manager for ""
	I1026 02:15:27.403538   67066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:15:27.403573   67066 start.go:340] cluster config:
	{Name:default-k8s-diff-port-661357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:15:27.403717   67066 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:15:27.405745   67066 out.go:177] * Starting "default-k8s-diff-port-661357" primary control-plane node in "default-k8s-diff-port-661357" cluster
	I1026 02:15:27.407319   67066 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:15:27.407362   67066 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 02:15:27.407375   67066 cache.go:56] Caching tarball of preloaded images
	I1026 02:15:27.407472   67066 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 02:15:27.407487   67066 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 02:15:27.407612   67066 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/config.json ...
	I1026 02:15:27.407850   67066 start.go:360] acquireMachinesLock for default-k8s-diff-port-661357: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 02:15:27.407893   67066 start.go:364] duration metric: took 24.39µs to acquireMachinesLock for "default-k8s-diff-port-661357"
	I1026 02:15:27.407914   67066 start.go:96] Skipping create...Using existing machine configuration
	I1026 02:15:27.407922   67066 fix.go:54] fixHost starting: 
	I1026 02:15:27.408209   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:15:27.408249   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:15:27.422977   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42159
	I1026 02:15:27.423350   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:15:27.423824   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:15:27.423847   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:15:27.424171   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:15:27.424338   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:15:27.424502   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:15:27.426304   67066 fix.go:112] recreateIfNeeded on default-k8s-diff-port-661357: state=Running err=<nil>
	W1026 02:15:27.426337   67066 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 02:15:27.428299   67066 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-661357" VM ...
	I1026 02:15:27.429557   67066 machine.go:93] provisionDockerMachine start ...
	I1026 02:15:27.429586   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:15:27.429817   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:15:27.432629   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:15:27.433124   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:15:27.433157   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:15:27.433315   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:15:27.433540   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:15:27.433688   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:15:27.433817   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:15:27.433940   67066 main.go:141] libmachine: Using SSH client type: native
	I1026 02:15:27.434150   67066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:15:27.434165   67066 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 02:15:30.317691   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:33.389688   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:39.469675   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:42.541741   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:48.625728   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:51.693782   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:00.813656   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:03.885647   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:09.965637   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:13.037626   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:19.117681   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:22.189689   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:28.273657   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:31.341685   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:37.421654   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:40.493714   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:46.573667   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:49.645724   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:55.725675   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:58.797640   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:04.877698   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:07.949690   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:14.033654   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:17.101631   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:23.181650   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:26.253675   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:32.333666   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:35.405742   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:41.489689   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:44.557647   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:50.637659   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:53.709622   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:59.789723   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:02.861727   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:08.945680   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:12.013718   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:18.093693   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:21.169616   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:27.245681   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:30.317690   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:36.397652   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:39.469689   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:45.549661   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:48.621666   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:54.705716   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:57.773712   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:03.853656   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:06.925672   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:13.005700   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:16.077672   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:22.161718   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:25.229728   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:31.313674   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:34.381761   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:40.461651   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:43.533728   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:49.613664   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:52.689645   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:58.765677   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:20:01.837755   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:20:04.838824   67066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:20:04.838856   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetMachineName
	I1026 02:20:04.839160   67066 buildroot.go:166] provisioning hostname "default-k8s-diff-port-661357"
	I1026 02:20:04.839194   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetMachineName
	I1026 02:20:04.839412   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:04.840850   67066 machine.go:96] duration metric: took 4m37.411273522s to provisionDockerMachine
	I1026 02:20:04.840889   67066 fix.go:56] duration metric: took 4m37.432968576s for fixHost
	I1026 02:20:04.840895   67066 start.go:83] releasing machines lock for "default-k8s-diff-port-661357", held for 4m37.432989897s
	W1026 02:20:04.840909   67066 start.go:714] error starting host: provision: host is not running
	W1026 02:20:04.840976   67066 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1026 02:20:04.840985   67066 start.go:729] Will try again in 5 seconds ...
	I1026 02:20:09.842689   67066 start.go:360] acquireMachinesLock for default-k8s-diff-port-661357: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 02:20:09.842791   67066 start.go:364] duration metric: took 60.747µs to acquireMachinesLock for "default-k8s-diff-port-661357"
	I1026 02:20:09.842816   67066 start.go:96] Skipping create...Using existing machine configuration
	I1026 02:20:09.842831   67066 fix.go:54] fixHost starting: 
	I1026 02:20:09.843132   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:20:09.843155   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:20:09.858340   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
	I1026 02:20:09.858814   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:20:09.859276   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:20:09.859298   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:20:09.859609   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:20:09.859793   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:09.859963   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:20:09.861770   67066 fix.go:112] recreateIfNeeded on default-k8s-diff-port-661357: state=Stopped err=<nil>
	I1026 02:20:09.861794   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	W1026 02:20:09.861945   67066 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 02:20:09.864154   67066 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-661357" ...
	I1026 02:20:09.865351   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Start
	I1026 02:20:09.865594   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Ensuring networks are active...
	I1026 02:20:09.866340   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Ensuring network default is active
	I1026 02:20:09.866708   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Ensuring network mk-default-k8s-diff-port-661357 is active
	I1026 02:20:09.867181   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Getting domain xml...
	I1026 02:20:09.867849   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Creating domain...
	I1026 02:20:11.157180   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting to get IP...
	I1026 02:20:11.158004   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:11.158420   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:11.158479   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:11.158401   68753 retry.go:31] will retry after 205.32589ms: waiting for machine to come up
	I1026 02:20:11.366215   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:11.366787   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:11.366816   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:11.366743   68753 retry.go:31] will retry after 372.887432ms: waiting for machine to come up
	I1026 02:20:11.741620   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:11.742196   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:11.742217   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:11.742154   68753 retry.go:31] will retry after 309.993426ms: waiting for machine to come up
	I1026 02:20:12.053939   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:12.054367   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:12.054396   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:12.054333   68753 retry.go:31] will retry after 391.94553ms: waiting for machine to come up
	I1026 02:20:12.447938   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:12.448418   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:12.448442   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:12.448370   68753 retry.go:31] will retry after 658.550669ms: waiting for machine to come up
	I1026 02:20:13.108487   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:13.109103   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:13.109129   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:13.109035   68753 retry.go:31] will retry after 709.02963ms: waiting for machine to come up
	I1026 02:20:13.819859   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:13.820380   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:13.820410   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:13.820328   68753 retry.go:31] will retry after 845.655125ms: waiting for machine to come up
	I1026 02:20:14.667789   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:14.668287   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:14.668315   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:14.668232   68753 retry.go:31] will retry after 1.007484364s: waiting for machine to come up
	I1026 02:20:15.677769   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:15.678274   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:15.678305   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:15.678183   68753 retry.go:31] will retry after 1.820092111s: waiting for machine to come up
	I1026 02:20:17.501043   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:17.501462   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:17.501497   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:17.501456   68753 retry.go:31] will retry after 1.646280238s: waiting for machine to come up
	I1026 02:20:19.150297   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:19.150860   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:19.150887   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:19.150823   68753 retry.go:31] will retry after 2.698451428s: waiting for machine to come up
	I1026 02:20:21.850608   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:21.851011   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:21.851042   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:21.850970   68753 retry.go:31] will retry after 2.282943942s: waiting for machine to come up
	I1026 02:20:24.136310   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:24.136784   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:24.136813   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:24.136736   68753 retry.go:31] will retry after 3.403699394s: waiting for machine to come up
	I1026 02:20:27.543572   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.544171   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Found IP for machine: 192.168.72.18
	I1026 02:20:27.544200   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Reserving static IP address...
	I1026 02:20:27.544216   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has current primary IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.544612   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-661357", mac: "52:54:00:0c:41:27", ip: "192.168.72.18"} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:27.544633   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Reserved static IP address: 192.168.72.18
	I1026 02:20:27.544645   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | skip adding static IP to network mk-default-k8s-diff-port-661357 - found existing host DHCP lease matching {name: "default-k8s-diff-port-661357", mac: "52:54:00:0c:41:27", ip: "192.168.72.18"}
	I1026 02:20:27.544656   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Getting to WaitForSSH function...
	I1026 02:20:27.544667   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for SSH to be available...
	I1026 02:20:27.547163   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.547543   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:27.547574   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.547780   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Using SSH client type: external
	I1026 02:20:27.547816   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa (-rw-------)
	I1026 02:20:27.547858   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 02:20:27.547876   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | About to run SSH command:
	I1026 02:20:27.547890   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | exit 0
	I1026 02:20:27.669305   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | SSH cmd err, output: <nil>: 
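
	The retry sequence above shows libmachine polling libvirt's DHCP leases with growing delays ("will retry after ...") until the restarted VM reports 192.168.72.18 and answers an "exit 0" probe over SSH. Below is a minimal Go sketch of that wait-with-backoff pattern; the names lookupIP/waitForIP and the exact delay schedule are illustrative assumptions, not minikube's actual retry.go.

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases for the domain's
// current address; it is a placeholder for illustration only.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with growing delays until it succeeds or the
// deadline passes, mirroring the "will retry after ..." lines in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 3*time.Second {
			delay += delay / 2 // roughly the growing intervals seen above
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	ip, err := waitForIP(2 * time.Second)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("found IP for machine:", ip)
}
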
	I1026 02:20:27.669693   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetConfigRaw
	I1026 02:20:27.670363   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetIP
	I1026 02:20:27.673029   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.673439   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:27.673468   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.673720   67066 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/config.json ...
	I1026 02:20:27.673952   67066 machine.go:93] provisionDockerMachine start ...
	I1026 02:20:27.673973   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:27.674200   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:27.676638   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.676982   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:27.677013   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.677123   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:27.677299   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:27.677481   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:27.677616   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:27.677769   67066 main.go:141] libmachine: Using SSH client type: native
	I1026 02:20:27.677965   67066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:20:27.677977   67066 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 02:20:27.777578   67066 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1026 02:20:27.777607   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetMachineName
	I1026 02:20:27.777854   67066 buildroot.go:166] provisioning hostname "default-k8s-diff-port-661357"
	I1026 02:20:27.777884   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetMachineName
	I1026 02:20:27.778079   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:27.780842   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.781223   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:27.781247   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.781467   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:27.781649   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:27.781786   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:27.781898   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:27.782054   67066 main.go:141] libmachine: Using SSH client type: native
	I1026 02:20:27.782256   67066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:20:27.782281   67066 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-661357 && echo "default-k8s-diff-port-661357" | sudo tee /etc/hostname
	I1026 02:20:27.896677   67066 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-661357
	
	I1026 02:20:27.896708   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:27.899493   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.899870   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:27.899936   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.900124   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:27.900328   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:27.900496   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:27.900663   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:27.900904   67066 main.go:141] libmachine: Using SSH client type: native
	I1026 02:20:27.901120   67066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:20:27.901137   67066 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-661357' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-661357/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-661357' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 02:20:28.011530   67066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
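
	The hostname and /etc/hosts edits above are run on the guest over SSH. Earlier in this log ("Using SSH client type: external") the provisioner shells out to the ssh binary with host-key checking disabled and the machine's private key; the hypothetical Go sketch below mirrors that invocation with a subset of the logged flags, reusing the key path and address from this run.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Flags mirror part of the external ssh invocation logged above; the key
	// path and address are environment-specific values taken from this run.
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa",
		"docker@192.168.72.18",
		"exit 0", // the same liveness probe the provisioner runs
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}
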
	I1026 02:20:28.011565   67066 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 02:20:28.011596   67066 buildroot.go:174] setting up certificates
	I1026 02:20:28.011606   67066 provision.go:84] configureAuth start
	I1026 02:20:28.011614   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetMachineName
	I1026 02:20:28.011917   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetIP
	I1026 02:20:28.014919   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.015327   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.015353   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.015542   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:28.017631   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.017987   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.018015   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.018230   67066 provision.go:143] copyHostCerts
	I1026 02:20:28.018310   67066 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 02:20:28.018328   67066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 02:20:28.018405   67066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 02:20:28.018513   67066 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 02:20:28.018523   67066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 02:20:28.018562   67066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 02:20:28.018668   67066 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 02:20:28.018681   67066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 02:20:28.018718   67066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 02:20:28.018784   67066 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-661357 san=[127.0.0.1 192.168.72.18 default-k8s-diff-port-661357 localhost minikube]
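
	The provision.go line above regenerates the machine's server certificate, signed by the minikube CA, with SANs 127.0.0.1, 192.168.72.18, default-k8s-diff-port-661357, localhost and minikube. The following self-contained Go sketch issues a SAN-bearing server certificate with crypto/x509 under similar parameters; the throwaway CA, RSA key size and serial numbers are assumptions for illustration, not minikube's own certificate code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical stand-in for the minikube CA: a throwaway self-signed CA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches the CertExpiration in this profile
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-661357"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-661357", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.18")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Write server.pem, as the provisioner does before copying it to /etc/docker.
	f, _ := os.Create("server.pem")
	defer f.Close()
	pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
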
	I1026 02:20:28.283116   67066 provision.go:177] copyRemoteCerts
	I1026 02:20:28.283179   67066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 02:20:28.283203   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:28.285996   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.286331   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.286355   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.286505   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:28.286714   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:28.286858   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:28.286960   67066 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:20:28.367395   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 02:20:28.391783   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1026 02:20:28.414947   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 02:20:28.440556   67066 provision.go:87] duration metric: took 428.936668ms to configureAuth
	I1026 02:20:28.440591   67066 buildroot.go:189] setting minikube options for container-runtime
	I1026 02:20:28.440783   67066 config.go:182] Loaded profile config "default-k8s-diff-port-661357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:20:28.440865   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:28.443825   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.444235   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.444281   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.444450   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:28.444683   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:28.444890   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:28.445056   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:28.445252   67066 main.go:141] libmachine: Using SSH client type: native
	I1026 02:20:28.445484   67066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:20:28.445513   67066 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 02:20:28.657448   67066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 02:20:28.657478   67066 machine.go:96] duration metric: took 983.512613ms to provisionDockerMachine
	I1026 02:20:28.657490   67066 start.go:293] postStartSetup for "default-k8s-diff-port-661357" (driver="kvm2")
	I1026 02:20:28.657501   67066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 02:20:28.657522   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:28.657861   67066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 02:20:28.657890   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:28.660571   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.660926   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.660959   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.661118   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:28.661298   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:28.661472   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:28.661620   67066 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:20:28.740276   67066 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 02:20:28.744331   67066 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 02:20:28.744356   67066 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 02:20:28.744454   67066 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 02:20:28.744564   67066 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 02:20:28.744699   67066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 02:20:28.754074   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:20:28.776812   67066 start.go:296] duration metric: took 119.305158ms for postStartSetup
	I1026 02:20:28.776859   67066 fix.go:56] duration metric: took 18.93402724s for fixHost
	I1026 02:20:28.776882   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:28.779953   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.780312   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.780340   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.780524   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:28.780741   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:28.780886   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:28.781041   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:28.781233   67066 main.go:141] libmachine: Using SSH client type: native
	I1026 02:20:28.781510   67066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:20:28.781527   67066 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 02:20:28.882528   67066 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729909228.857250366
	
	I1026 02:20:28.882548   67066 fix.go:216] guest clock: 1729909228.857250366
	I1026 02:20:28.882556   67066 fix.go:229] Guest: 2024-10-26 02:20:28.857250366 +0000 UTC Remote: 2024-10-26 02:20:28.776864275 +0000 UTC m=+301.517684501 (delta=80.386091ms)
	I1026 02:20:28.882576   67066 fix.go:200] guest clock delta is within tolerance: 80.386091ms
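
	fix.go reads the guest clock over SSH (date +%s.%N) and compares it with the host-side timestamp, accepting the ~80ms delta as within tolerance. A trivial Go sketch of that comparison, using the timestamps from the log; the one-second tolerance here is an assumed value, not the tolerance minikube applies.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// host clock, in the spirit of the fix.go lines above.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(0, int64(1729909228857250366)) // 1729909228.857250366 from the log
	host := guest.Add(-80386091 * time.Nanosecond)    // reproduces the logged delta of 80.386091ms
	delta, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}
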
	I1026 02:20:28.882581   67066 start.go:83] releasing machines lock for "default-k8s-diff-port-661357", held for 19.03978033s
	I1026 02:20:28.882597   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:28.882848   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetIP
	I1026 02:20:28.885339   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.885691   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.885721   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.885871   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:28.886321   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:28.886498   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:28.886579   67066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 02:20:28.886634   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:28.886776   67066 ssh_runner.go:195] Run: cat /version.json
	I1026 02:20:28.886803   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:28.889458   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.889630   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.889839   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.889865   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.890022   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:28.890032   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.890056   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.890242   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:28.890243   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:28.890401   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:28.890466   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:28.890581   67066 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:20:28.890673   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:28.890982   67066 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:20:29.001770   67066 ssh_runner.go:195] Run: systemctl --version
	I1026 02:20:29.007670   67066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 02:20:29.150271   67066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 02:20:29.156252   67066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 02:20:29.156336   67066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 02:20:29.172267   67066 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 02:20:29.172292   67066 start.go:495] detecting cgroup driver to use...
	I1026 02:20:29.172352   67066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 02:20:29.188769   67066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 02:20:29.203250   67066 docker.go:217] disabling cri-docker service (if available) ...
	I1026 02:20:29.203306   67066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 02:20:29.217222   67066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 02:20:29.230972   67066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 02:20:29.346698   67066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 02:20:29.520440   67066 docker.go:233] disabling docker service ...
	I1026 02:20:29.520532   67066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 02:20:29.534512   67066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 02:20:29.547618   67066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 02:20:29.674170   67066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 02:20:29.790614   67066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 02:20:29.805113   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 02:20:29.823385   67066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 02:20:29.823459   67066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:20:29.834548   67066 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 02:20:29.834612   67066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:20:29.845635   67066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:20:29.855964   67066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:20:29.867741   67066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 02:20:29.878595   67066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:20:29.889257   67066 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:20:29.906208   67066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:20:29.917146   67066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 02:20:29.926950   67066 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 02:20:29.927020   67066 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 02:20:29.941373   67066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 02:20:29.951206   67066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:20:30.066163   67066 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 02:20:30.155026   67066 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 02:20:30.155112   67066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 02:20:30.159790   67066 start.go:563] Will wait 60s for crictl version
	I1026 02:20:30.159849   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:20:30.163600   67066 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 02:20:30.203002   67066 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 02:20:30.203078   67066 ssh_runner.go:195] Run: crio --version
	I1026 02:20:30.229655   67066 ssh_runner.go:195] Run: crio --version
	I1026 02:20:30.260019   67066 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 02:20:30.261218   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetIP
	I1026 02:20:30.264497   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:30.264886   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:30.264907   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:30.265160   67066 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1026 02:20:30.269055   67066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:20:30.281497   67066 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-661357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 02:20:30.281649   67066 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:20:30.281743   67066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:20:30.317981   67066 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1026 02:20:30.318061   67066 ssh_runner.go:195] Run: which lz4
	I1026 02:20:30.321759   67066 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 02:20:30.325850   67066 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 02:20:30.325896   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1026 02:20:31.651772   67066 crio.go:462] duration metric: took 1.330041951s to copy over tarball
	I1026 02:20:31.651888   67066 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 02:20:33.804858   67066 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.152934864s)
	I1026 02:20:33.804901   67066 crio.go:469] duration metric: took 2.153098897s to extract the tarball
	I1026 02:20:33.804912   67066 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 02:20:33.841380   67066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:20:33.884198   67066 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 02:20:33.884234   67066 cache_images.go:84] Images are preloaded, skipping loading
	I1026 02:20:33.884244   67066 kubeadm.go:934] updating node { 192.168.72.18 8444 v1.31.2 crio true true} ...
	I1026 02:20:33.884372   67066 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-661357 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 02:20:33.884455   67066 ssh_runner.go:195] Run: crio config
	I1026 02:20:33.938946   67066 cni.go:84] Creating CNI manager for ""
	I1026 02:20:33.938971   67066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:20:33.938983   67066 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 02:20:33.939013   67066 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.18 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-661357 NodeName:default-k8s-diff-port-661357 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 02:20:33.939158   67066 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.18
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-661357"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.18"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.18"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 02:20:33.939231   67066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 02:20:33.949891   67066 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 02:20:33.949958   67066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 02:20:33.959789   67066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1026 02:20:33.976623   67066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 02:20:33.991359   67066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
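
	The kubeadm.go lines above render the InitConfiguration/ClusterConfiguration/KubeletConfiguration shown earlier from the logged kubeadm options and copy the result to /var/tmp/minikube/kubeadm.yaml.new. A stripped-down, hypothetical Go text/template sketch of that rendering step follows, using the advertise address, API server port, CRI socket and node name from this run; the template text and struct here are illustrative, not minikube's actual bootstrapper template.

package main

import (
	"os"
	"text/template"
)

// A trimmed, hypothetical template in the spirit of the kubeadm config
// printed above; the field names are illustrative, not minikube's own.
const initConfig = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

type kubeadmParams struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
}

func main() {
	tmpl := template.Must(template.New("init").Parse(initConfig))
	// Values taken from the kubeadm options logged above.
	p := kubeadmParams{
		AdvertiseAddress: "192.168.72.18",
		APIServerPort:    8444,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "default-k8s-diff-port-661357",
	}
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
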
	I1026 02:20:34.007135   67066 ssh_runner.go:195] Run: grep 192.168.72.18	control-plane.minikube.internal$ /etc/hosts
	I1026 02:20:34.010559   67066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:20:34.021707   67066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:20:34.150232   67066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:20:34.177824   67066 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357 for IP: 192.168.72.18
	I1026 02:20:34.177849   67066 certs.go:194] generating shared ca certs ...
	I1026 02:20:34.177869   67066 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:20:34.178034   67066 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 02:20:34.178097   67066 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 02:20:34.178112   67066 certs.go:256] generating profile certs ...
	I1026 02:20:34.178241   67066 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/client.key
	I1026 02:20:34.178341   67066 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.key.29c0eec6
	I1026 02:20:34.178401   67066 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/proxy-client.key
	I1026 02:20:34.178613   67066 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 02:20:34.178665   67066 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 02:20:34.178677   67066 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 02:20:34.178709   67066 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 02:20:34.178747   67066 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 02:20:34.178780   67066 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 02:20:34.178839   67066 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:20:34.179773   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 02:20:34.228350   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 02:20:34.274677   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 02:20:34.312372   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 02:20:34.343042   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 02:20:34.369490   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 02:20:34.392203   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 02:20:34.414716   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 02:20:34.439171   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 02:20:34.462507   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 02:20:34.484198   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 02:20:34.506399   67066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 02:20:34.521925   67066 ssh_runner.go:195] Run: openssl version
	I1026 02:20:34.527762   67066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 02:20:34.537980   67066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 02:20:34.542334   67066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 02:20:34.542393   67066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 02:20:34.548210   67066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 02:20:34.558367   67066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 02:20:34.568179   67066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 02:20:34.572155   67066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 02:20:34.572207   67066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 02:20:34.577337   67066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 02:20:34.586783   67066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 02:20:34.596539   67066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:20:34.600705   67066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:20:34.600751   67066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:20:34.606006   67066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 02:20:34.615835   67066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 02:20:34.619908   67066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 02:20:34.625291   67066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 02:20:34.630936   67066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 02:20:34.636410   67066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 02:20:34.641881   67066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 02:20:34.648366   67066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
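The openssl x509 -checkend 86400 calls above ask whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit status means the certificate is about to expire and needs to be regenerated before the cluster is restarted. A minimal Go equivalent of that check is sketched below (hypothetical helper, not minikube's code; the certificate path is copied from the log).

// certcheck.go: rough equivalent of `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Mirror -checkend 86400: fail if the cert expires within the next 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 86400 seconds")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}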
	I1026 02:20:34.653688   67066 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-661357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:20:34.653770   67066 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 02:20:34.653819   67066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:20:34.692272   67066 cri.go:89] found id: ""
	I1026 02:20:34.692362   67066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 02:20:34.702791   67066 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1026 02:20:34.702811   67066 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1026 02:20:34.702858   67066 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 02:20:34.712118   67066 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 02:20:34.713520   67066 kubeconfig.go:125] found "default-k8s-diff-port-661357" server: "https://192.168.72.18:8444"
	I1026 02:20:34.716689   67066 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 02:20:34.725334   67066 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.18
	I1026 02:20:34.725362   67066 kubeadm.go:1160] stopping kube-system containers ...
	I1026 02:20:34.725374   67066 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1026 02:20:34.725440   67066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:20:34.757678   67066 cri.go:89] found id: ""
	I1026 02:20:34.757745   67066 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1026 02:20:34.772453   67066 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:20:34.781104   67066 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:20:34.781125   67066 kubeadm.go:157] found existing configuration files:
	
	I1026 02:20:34.781173   67066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1026 02:20:34.789342   67066 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:20:34.789396   67066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:20:34.797951   67066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1026 02:20:34.805987   67066 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:20:34.806057   67066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:20:34.814807   67066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1026 02:20:34.822626   67066 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:20:34.822693   67066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:20:34.830967   67066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1026 02:20:34.839120   67066 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:20:34.839177   67066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 02:20:34.847796   67066 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 02:20:34.856342   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:20:34.956523   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:20:35.768693   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:20:35.968797   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:20:36.040536   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:20:36.130180   67066 api_server.go:52] waiting for apiserver process to appear ...
	I1026 02:20:36.130300   67066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:20:36.630495   67066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:20:37.130625   67066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:20:37.630728   67066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:20:38.130795   67066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:20:38.151595   67066 api_server.go:72] duration metric: took 2.02141435s to wait for apiserver process to appear ...
	I1026 02:20:38.151637   67066 api_server.go:88] waiting for apiserver healthz status ...
	I1026 02:20:38.151662   67066 api_server.go:253] Checking apiserver healthz at https://192.168.72.18:8444/healthz ...
	I1026 02:20:38.152162   67066 api_server.go:269] stopped: https://192.168.72.18:8444/healthz: Get "https://192.168.72.18:8444/healthz": dial tcp 192.168.72.18:8444: connect: connection refused
	I1026 02:20:38.651789   67066 api_server.go:253] Checking apiserver healthz at https://192.168.72.18:8444/healthz ...
	I1026 02:20:40.769681   67066 api_server.go:279] https://192.168.72.18:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 02:20:40.769741   67066 api_server.go:103] status: https://192.168.72.18:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 02:20:40.769766   67066 api_server.go:253] Checking apiserver healthz at https://192.168.72.18:8444/healthz ...
	I1026 02:20:40.810385   67066 api_server.go:279] https://192.168.72.18:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 02:20:40.810422   67066 api_server.go:103] status: https://192.168.72.18:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 02:20:41.152677   67066 api_server.go:253] Checking apiserver healthz at https://192.168.72.18:8444/healthz ...
	I1026 02:20:41.164322   67066 api_server.go:279] https://192.168.72.18:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 02:20:41.164353   67066 api_server.go:103] status: https://192.168.72.18:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 02:20:41.651791   67066 api_server.go:253] Checking apiserver healthz at https://192.168.72.18:8444/healthz ...
	I1026 02:20:41.658110   67066 api_server.go:279] https://192.168.72.18:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 02:20:41.658146   67066 api_server.go:103] status: https://192.168.72.18:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 02:20:42.151728   67066 api_server.go:253] Checking apiserver healthz at https://192.168.72.18:8444/healthz ...
	I1026 02:20:42.163110   67066 api_server.go:279] https://192.168.72.18:8444/healthz returned 200:
	ok
	I1026 02:20:42.170287   67066 api_server.go:141] control plane version: v1.31.2
	I1026 02:20:42.170314   67066 api_server.go:131] duration metric: took 4.018669008s to wait for apiserver health ...
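The healthz wait above simply polls https://192.168.72.18:8444/healthz until the apiserver answers 200, retrying through the early failures seen in the log: connection refused while the process starts, 403 while only anonymous access is possible, and 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still completing. A rough Go sketch of that polling loop is shown below (illustrative only, not minikube's implementation; the address is copied from the log, and TLS verification is skipped purely for the example).

// healthwait.go: hypothetical sketch of polling the apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver certificate is signed by the cluster CA; verification
		// is skipped here only to keep the illustration self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.18:8444/healthz" // address taken from the log above
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for apiserver")
}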
	I1026 02:20:42.170324   67066 cni.go:84] Creating CNI manager for ""
	I1026 02:20:42.170332   67066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:20:42.172451   67066 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 02:20:42.173984   67066 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 02:20:42.185616   67066 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1026 02:20:42.223096   67066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 02:20:42.234788   67066 system_pods.go:59] 8 kube-system pods found
	I1026 02:20:42.234847   67066 system_pods.go:61] "coredns-7c65d6cfc9-xpxp4" [d3ea4ee4-aab2-4c92-ab2f-e1026c703ea1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 02:20:42.234863   67066 system_pods.go:61] "etcd-default-k8s-diff-port-661357" [e0edffc7-d9fa-45e0-9250-3ea465d61e01] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 02:20:42.234878   67066 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-661357" [87332b2c-b6bd-4008-8db7-76b60f782d8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 02:20:42.234892   67066 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-661357" [4eb18006-0e9c-466c-8be9-c16250a8851b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 02:20:42.234905   67066 system_pods.go:61] "kube-proxy-c947q" [e41c6a1e-1a8e-4c49-93ff-e0c60a87ea69] Running
	I1026 02:20:42.234914   67066 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-661357" [af14b2f6-20bd-4f05-9a9d-ea1ca7e53887] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 02:20:42.234924   67066 system_pods.go:61] "metrics-server-6867b74b74-jkl5g" [023bd779-83b7-42ef-893d-f7ab70f08ae7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 02:20:42.234940   67066 system_pods.go:61] "storage-provisioner" [90c86915-4d74-4774-b8cd-86bf37672a55] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 02:20:42.234952   67066 system_pods.go:74] duration metric: took 11.834154ms to wait for pod list to return data ...
	I1026 02:20:42.234964   67066 node_conditions.go:102] verifying NodePressure condition ...
	I1026 02:20:42.240100   67066 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 02:20:42.240138   67066 node_conditions.go:123] node cpu capacity is 2
	I1026 02:20:42.240153   67066 node_conditions.go:105] duration metric: took 5.181139ms to run NodePressure ...
	I1026 02:20:42.240175   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:20:42.505336   67066 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1026 02:20:42.510487   67066 kubeadm.go:739] kubelet initialised
	I1026 02:20:42.510509   67066 kubeadm.go:740] duration metric: took 5.142371ms waiting for restarted kubelet to initialise ...
	I1026 02:20:42.510517   67066 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:20:42.515070   67066 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:42.519704   67066 pod_ready.go:98] node "default-k8s-diff-port-661357" hosting pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:42.519733   67066 pod_ready.go:82] duration metric: took 4.641295ms for pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace to be "Ready" ...
	E1026 02:20:42.519745   67066 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-661357" hosting pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:42.519754   67066 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:42.523349   67066 pod_ready.go:98] node "default-k8s-diff-port-661357" hosting pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:42.523371   67066 pod_ready.go:82] duration metric: took 3.607793ms for pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	E1026 02:20:42.523389   67066 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-661357" hosting pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:42.523404   67066 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:42.527098   67066 pod_ready.go:98] node "default-k8s-diff-port-661357" hosting pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:42.527122   67066 pod_ready.go:82] duration metric: took 3.706328ms for pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	E1026 02:20:42.527134   67066 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-661357" hosting pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:42.527147   67066 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:42.626144   67066 pod_ready.go:98] node "default-k8s-diff-port-661357" hosting pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:42.626175   67066 pod_ready.go:82] duration metric: took 99.014479ms for pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	E1026 02:20:42.626187   67066 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-661357" hosting pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:42.626194   67066 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-c947q" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:43.026245   67066 pod_ready.go:98] node "default-k8s-diff-port-661357" hosting pod "kube-proxy-c947q" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:43.026277   67066 pod_ready.go:82] duration metric: took 400.075235ms for pod "kube-proxy-c947q" in "kube-system" namespace to be "Ready" ...
	E1026 02:20:43.026289   67066 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-661357" hosting pod "kube-proxy-c947q" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:43.026298   67066 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:43.426236   67066 pod_ready.go:98] node "default-k8s-diff-port-661357" hosting pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:43.426268   67066 pod_ready.go:82] duration metric: took 399.958763ms for pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	E1026 02:20:43.426285   67066 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-661357" hosting pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:43.426295   67066 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:43.827259   67066 pod_ready.go:98] node "default-k8s-diff-port-661357" hosting pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:43.827290   67066 pod_ready.go:82] duration metric: took 400.983426ms for pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace to be "Ready" ...
	E1026 02:20:43.827305   67066 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-661357" hosting pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:43.827316   67066 pod_ready.go:39] duration metric: took 1.316791104s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
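The pod_ready waits above treat a pod as "Ready" only once its PodReady condition reports True, and they skip (and log an error for) pods whose host node is itself not yet Ready, which is why every component is skipped during this first round. A small Go sketch of that readiness predicate, assuming k8s.io/api/core/v1 and a hypothetical file name (this is not minikube's actual helper), is given below.

// podready.go: hypothetical version of the "Ready" predicate used by the waits above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
		},
	}
	fmt.Println("ready:", isPodReady(pod))
}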
	I1026 02:20:43.827333   67066 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 02:20:43.839420   67066 ops.go:34] apiserver oom_adj: -16
	I1026 02:20:43.839452   67066 kubeadm.go:597] duration metric: took 9.136633662s to restartPrimaryControlPlane
	I1026 02:20:43.839468   67066 kubeadm.go:394] duration metric: took 9.185783947s to StartCluster
	I1026 02:20:43.839492   67066 settings.go:142] acquiring lock: {Name:mkb363a7a1b1532a7f832b54a0283d0a9e3d2b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:20:43.839591   67066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:20:43.842166   67066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:20:43.842434   67066 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 02:20:43.842534   67066 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 02:20:43.842640   67066 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-661357"
	I1026 02:20:43.842660   67066 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-661357"
	I1026 02:20:43.842667   67066 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-661357"
	W1026 02:20:43.842677   67066 addons.go:243] addon storage-provisioner should already be in state true
	I1026 02:20:43.842693   67066 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-661357"
	I1026 02:20:43.842689   67066 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-661357"
	I1026 02:20:43.842708   67066 host.go:66] Checking if "default-k8s-diff-port-661357" exists ...
	I1026 02:20:43.842713   67066 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-661357"
	W1026 02:20:43.842721   67066 addons.go:243] addon metrics-server should already be in state true
	I1026 02:20:43.842737   67066 config.go:182] Loaded profile config "default-k8s-diff-port-661357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:20:43.842749   67066 host.go:66] Checking if "default-k8s-diff-port-661357" exists ...
	I1026 02:20:43.843146   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:20:43.843163   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:20:43.843166   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:20:43.843183   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:20:43.843188   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:20:43.843200   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:20:43.844170   67066 out.go:177] * Verifying Kubernetes components...
	I1026 02:20:43.845572   67066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:20:43.859423   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37119
	I1026 02:20:43.859946   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:20:43.860482   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:20:43.860508   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:20:43.860900   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:20:43.861533   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:20:43.861580   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:20:43.863282   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33059
	I1026 02:20:43.863431   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34765
	I1026 02:20:43.863891   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:20:43.863911   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:20:43.864365   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:20:43.864385   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:20:43.864389   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:20:43.864407   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:20:43.864769   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:20:43.864788   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:20:43.864985   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:20:43.865314   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:20:43.865353   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:20:43.868025   67066 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-661357"
	W1026 02:20:43.868041   67066 addons.go:243] addon default-storageclass should already be in state true
	I1026 02:20:43.868063   67066 host.go:66] Checking if "default-k8s-diff-port-661357" exists ...
	I1026 02:20:43.868321   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:20:43.868357   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:20:43.877922   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41395
	I1026 02:20:43.878359   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:20:43.878855   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:20:43.878868   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:20:43.879138   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:20:43.879294   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:20:43.880925   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:43.882414   67066 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1026 02:20:43.883480   67066 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 02:20:43.883498   67066 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 02:20:43.883516   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:43.886539   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:43.886936   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:43.886958   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:43.887173   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:43.887326   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:43.887469   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:43.887593   67066 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:20:43.889753   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38213
	I1026 02:20:43.890268   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:20:43.890810   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:20:43.890840   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:20:43.891162   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:20:43.891350   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:20:43.892902   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:43.894549   67066 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:20:43.895782   67066 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:20:43.895797   67066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 02:20:43.895814   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:43.899634   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:43.900029   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:43.900047   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:43.900244   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:43.900368   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:43.900505   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:43.900633   67066 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:20:43.907056   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
	I1026 02:20:43.907446   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:20:43.908340   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:20:43.908359   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:20:43.908692   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:20:43.910127   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:20:43.910158   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:20:43.926987   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45143
	I1026 02:20:43.927446   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:20:43.929170   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:20:43.929188   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:20:43.929754   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:20:43.930383   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:20:43.932008   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:43.932199   67066 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 02:20:43.932215   67066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 02:20:43.932233   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:43.934609   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:43.934877   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:43.934900   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:43.935066   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:43.935213   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:43.935335   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:43.935519   67066 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:20:44.079965   67066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:20:44.101438   67066 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-661357" to be "Ready" ...
	I1026 02:20:44.157295   67066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 02:20:44.253190   67066 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 02:20:44.253216   67066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1026 02:20:44.263508   67066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:20:44.318176   67066 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 02:20:44.318219   67066 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 02:20:44.398217   67066 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 02:20:44.398239   67066 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 02:20:44.491239   67066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 02:20:44.623927   67066 main.go:141] libmachine: Making call to close driver server
	I1026 02:20:44.623955   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:20:44.624363   67066 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:20:44.624383   67066 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:20:44.624396   67066 main.go:141] libmachine: Making call to close driver server
	I1026 02:20:44.624405   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:20:44.624622   67066 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:20:44.624639   67066 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:20:44.624642   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Closing plugin on server side
	I1026 02:20:44.631038   67066 main.go:141] libmachine: Making call to close driver server
	I1026 02:20:44.631055   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:20:44.631301   67066 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:20:44.631320   67066 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:20:45.235238   67066 main.go:141] libmachine: Making call to close driver server
	I1026 02:20:45.235265   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:20:45.235592   67066 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:20:45.235618   67066 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:20:45.235628   67066 main.go:141] libmachine: Making call to close driver server
	I1026 02:20:45.235627   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Closing plugin on server side
	I1026 02:20:45.235637   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:20:45.235905   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Closing plugin on server side
	I1026 02:20:45.235947   67066 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:20:45.235966   67066 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:20:45.406802   67066 main.go:141] libmachine: Making call to close driver server
	I1026 02:20:45.406826   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:20:45.407169   67066 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:20:45.407188   67066 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:20:45.407197   67066 main.go:141] libmachine: Making call to close driver server
	I1026 02:20:45.407204   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:20:45.407434   67066 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:20:45.407449   67066 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:20:45.407460   67066 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-661357"
	I1026 02:20:45.407477   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Closing plugin on server side
	I1026 02:20:45.409386   67066 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1026 02:20:45.410709   67066 addons.go:510] duration metric: took 1.568186199s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1026 02:20:46.105327   67066 node_ready.go:53] node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:48.105495   67066 node_ready.go:53] node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:50.105708   67066 node_ready.go:53] node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:51.105506   67066 node_ready.go:49] node "default-k8s-diff-port-661357" has status "Ready":"True"
	I1026 02:20:51.105529   67066 node_ready.go:38] duration metric: took 7.004055158s for node "default-k8s-diff-port-661357" to be "Ready" ...
	I1026 02:20:51.105538   67066 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:20:51.110758   67066 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:51.116405   67066 pod_ready.go:93] pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace has status "Ready":"True"
	I1026 02:20:51.116427   67066 pod_ready.go:82] duration metric: took 5.642161ms for pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:51.116440   67066 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:53.124461   67066 pod_ready.go:93] pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace has status "Ready":"True"
	I1026 02:20:53.124489   67066 pod_ready.go:82] duration metric: took 2.008040829s for pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:53.124503   67066 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:53.130609   67066 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace has status "Ready":"True"
	I1026 02:20:53.130634   67066 pod_ready.go:82] duration metric: took 6.121774ms for pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:53.130646   67066 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:53.134438   67066 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace has status "Ready":"True"
	I1026 02:20:53.134457   67066 pod_ready.go:82] duration metric: took 3.804731ms for pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:53.134466   67066 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c947q" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:53.137983   67066 pod_ready.go:93] pod "kube-proxy-c947q" in "kube-system" namespace has status "Ready":"True"
	I1026 02:20:53.137999   67066 pod_ready.go:82] duration metric: took 3.52735ms for pod "kube-proxy-c947q" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:53.138008   67066 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:54.705479   67066 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace has status "Ready":"True"
	I1026 02:20:54.705508   67066 pod_ready.go:82] duration metric: took 1.567492895s for pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:54.705524   67066 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:56.713045   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:20:59.211741   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:01.713041   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:03.713999   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:06.212171   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:08.212292   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:10.212832   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:12.213756   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:14.711683   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:16.711769   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:18.712192   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:20.714206   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:23.211409   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:25.212766   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:27.712538   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:30.213972   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:32.712343   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:35.212266   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:37.712294   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:39.712378   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:42.211896   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:44.212804   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:46.712568   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:49.211905   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:51.212618   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:53.712161   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:55.713140   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:57.714672   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:00.212114   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:02.212796   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:04.212878   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:06.716498   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.670814161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909331670796623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78b805cd-f31b-443d-a28a-b5443d4cce5e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.671267424Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1142ee9-d222-4fd1-a5ab-193e610d0627 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.671314472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1142ee9-d222-4fd1-a5ab-193e610d0627 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.671357641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a1142ee9-d222-4fd1-a5ab-193e610d0627 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.702668702Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16be4575-424b-48d5-a34c-0834cd7ed520 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.702738406Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16be4575-424b-48d5-a34c-0834cd7ed520 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.704085971Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cde7862d-6983-4a1b-b4aa-f7942fb6e89b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.704521067Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909331704494889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cde7862d-6983-4a1b-b4aa-f7942fb6e89b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.705057607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=446dc2b9-85b4-4649-acea-43fc7207bda1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.705108772Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=446dc2b9-85b4-4649-acea-43fc7207bda1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.705192524Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=446dc2b9-85b4-4649-acea-43fc7207bda1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.734343886Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00ece6f1-57b5-4917-a603-a34ea05e2d2b name=/runtime.v1.RuntimeService/Version
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.734420337Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00ece6f1-57b5-4917-a603-a34ea05e2d2b name=/runtime.v1.RuntimeService/Version
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.735753842Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b7b28e4-90e6-40e2-a02e-5fc3956ff0a8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.736209936Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909331736117809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b7b28e4-90e6-40e2-a02e-5fc3956ff0a8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.736917440Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3be86932-cab1-4780-afec-71074b883dd8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.736996765Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3be86932-cab1-4780-afec-71074b883dd8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.737048291Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3be86932-cab1-4780-afec-71074b883dd8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.766716014Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=183633bc-3325-40fd-8588-046ee61be784 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.766792623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=183633bc-3325-40fd-8588-046ee61be784 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.767741126Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9992165-0a95-4196-af12-ffb99c66e111 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.768092232Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909331768072188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9992165-0a95-4196-af12-ffb99c66e111 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.768794071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47ee6054-278c-4010-b274-cbbcd5cfcca4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.768852441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47ee6054-278c-4010-b274-cbbcd5cfcca4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:22:11 old-k8s-version-385716 crio[627]: time="2024-10-26 02:22:11.768895469Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=47ee6054-278c-4010-b274-cbbcd5cfcca4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct26 02:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050858] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037180] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.872334] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.849137] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.534061] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.223439] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.056856] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067296] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.170318] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.142616] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.248491] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.314889] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.058322] kauditd_printk_skb: 130 callbacks suppressed
	[Oct26 02:05] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[ +12.983702] kauditd_printk_skb: 46 callbacks suppressed
	[Oct26 02:09] systemd-fstab-generator[5115]: Ignoring "noauto" option for root device
	[Oct26 02:11] systemd-fstab-generator[5409]: Ignoring "noauto" option for root device
	[  +0.069450] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 02:22:11 up 17 min,  0 users,  load average: 0.00, 0.01, 0.03
	Linux old-k8s-version-385716 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6579]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc00072bdd0)
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6579]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6579]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6579]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6579]: goroutine 154 [select]:
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6579]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000999ef0, 0x4f0ac20, 0xc000974820, 0x1, 0xc00009e0c0)
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6579]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6579]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000254e00, 0xc00009e0c0)
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6579]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6579]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6579]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6579]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0008c0fa0, 0xc000972d20)
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6579]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6579]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6579]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 26 02:22:09 old-k8s-version-385716 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 26 02:22:09 old-k8s-version-385716 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 26 02:22:09 old-k8s-version-385716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Oct 26 02:22:09 old-k8s-version-385716 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 26 02:22:09 old-k8s-version-385716 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6589]: I1026 02:22:09.736343    6589 server.go:416] Version: v1.20.0
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6589]: I1026 02:22:09.736660    6589 server.go:837] Client rotation is on, will bootstrap in background
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6589]: I1026 02:22:09.738608    6589 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6589]: W1026 02:22:09.739494    6589 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 26 02:22:09 old-k8s-version-385716 kubelet[6589]: I1026 02:22:09.739859    6589 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-385716 -n old-k8s-version-385716
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-385716 -n old-k8s-version-385716: exit status 2 (225.291849ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-385716" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.38s)
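
The kubelet log above shows kubelet.service exiting with status 255 and being restarted by systemd (restart counter at 114) while the apiserver on localhost:8443 refuses connections, which is why "describe nodes" fails and the container list comes back empty. Purely as an illustrative follow-up, not part of the recorded run, the same state could be inspected on the node, with the profile name taken from the log above:

	out/minikube-linux-amd64 ssh -p old-k8s-version-385716 "sudo journalctl -u kubelet --no-pager | tail -n 100"
	out/minikube-linux-amd64 ssh -p old-k8s-version-385716 "sudo crictl ps -a"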

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-661357 -n default-k8s-diff-port-661357
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-661357 -n default-k8s-diff-port-661357: exit status 3 (3.167597552s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1026 02:15:18.029748   66938 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.18:22: connect: no route to host
	E1026 02:15:18.029771   66938 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.18:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-661357 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-661357 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.157979365s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.18:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-661357 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-661357 -n default-k8s-diff-port-661357
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-661357 -n default-k8s-diff-port-661357: exit status 3 (3.062081032s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1026 02:15:27.249773   67019 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.18:22: connect: no route to host
	E1026 02:15:27.249797   67019 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.18:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-661357" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.39s)
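
Both status probes and the dashboard-addon enable above fail with "dial tcp 192.168.72.18:22: connect: no route to host", i.e. the guest became unreachable over SSH after the stop instead of reporting a clean "Stopped" state. A rough way to cross-check the VM state from the host, assuming the kvm2 driver's usual convention of naming the libvirt domain after the profile:

	sudo virsh domstate default-k8s-diff-port-661357
	sudo virsh domifaddr default-k8s-diff-port-661357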

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (485.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-767480 -n embed-certs-767480
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-26 02:26:20.006415019 +0000 UTC m=+6194.774178195
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-767480 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-767480 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.856µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-767480 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
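
The assertion block above boils down to two checks: a pod labelled k8s-app=kubernetes-dashboard must become Ready within 9m0s, and the dashboard-metrics-scraper deployment must reference registry.k8s.io/echoserver:1.4. A rough manual equivalent, reusing the context, namespace and selector quoted in the log (illustrative only):

	kubectl --context embed-certs-767480 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	kubectl --context embed-certs-767480 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper | grep Image
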
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767480 -n embed-certs-767480
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-767480 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-767480 logs -n 25: (1.134300456s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC | 26 Oct 24 02:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-385716                              | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC | 26 Oct 24 02:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-385716             | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC | 26 Oct 24 02:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-385716                              | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:11 UTC |
	| delete  | -p                                                     | disable-driver-mounts-713871 | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:11 UTC |
	|         | disable-driver-mounts-713871                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:12 UTC |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-661357  | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:12 UTC | 26 Oct 24 02:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:12 UTC |                     |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-661357       | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:15 UTC | 26 Oct 24 02:25 UTC |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-385716                              | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:24 UTC | 26 Oct 24 02:24 UTC |
	| start   | -p newest-cni-274222 --memory=2200 --alsologtostderr   | newest-cni-274222            | jenkins | v1.34.0 | 26 Oct 24 02:24 UTC | 26 Oct 24 02:25 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-093148                                   | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 02:25 UTC | 26 Oct 24 02:25 UTC |
	| start   | -p auto-761631 --memory=3072                           | auto-761631                  | jenkins | v1.34.0 | 26 Oct 24 02:25 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-274222             | newest-cni-274222            | jenkins | v1.34.0 | 26 Oct 24 02:25 UTC | 26 Oct 24 02:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-274222                                   | newest-cni-274222            | jenkins | v1.34.0 | 26 Oct 24 02:25 UTC | 26 Oct 24 02:25 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-274222                  | newest-cni-274222            | jenkins | v1.34.0 | 26 Oct 24 02:25 UTC | 26 Oct 24 02:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-274222 --memory=2200 --alsologtostderr   | newest-cni-274222            | jenkins | v1.34.0 | 26 Oct 24 02:25 UTC | 26 Oct 24 02:26 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-274222 image list                           | newest-cni-274222            | jenkins | v1.34.0 | 26 Oct 24 02:26 UTC | 26 Oct 24 02:26 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-274222                                   | newest-cni-274222            | jenkins | v1.34.0 | 26 Oct 24 02:26 UTC | 26 Oct 24 02:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-274222                                   | newest-cni-274222            | jenkins | v1.34.0 | 26 Oct 24 02:26 UTC | 26 Oct 24 02:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-274222                                   | newest-cni-274222            | jenkins | v1.34.0 | 26 Oct 24 02:26 UTC | 26 Oct 24 02:26 UTC |
	| delete  | -p newest-cni-274222                                   | newest-cni-274222            | jenkins | v1.34.0 | 26 Oct 24 02:26 UTC | 26 Oct 24 02:26 UTC |
	| start   | -p kindnet-761631                                      | kindnet-761631               | jenkins | v1.34.0 | 26 Oct 24 02:26 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 02:26:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 02:26:19.808996   71797 out.go:345] Setting OutFile to fd 1 ...
	I1026 02:26:19.809126   71797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:26:19.809137   71797 out.go:358] Setting ErrFile to fd 2...
	I1026 02:26:19.809145   71797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:26:19.809411   71797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 02:26:19.810143   71797 out.go:352] Setting JSON to false
	I1026 02:26:19.811452   71797 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7720,"bootTime":1729901860,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 02:26:19.811558   71797 start.go:139] virtualization: kvm guest
	I1026 02:26:19.813044   71797 out.go:177] * [kindnet-761631] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 02:26:19.814763   71797 notify.go:220] Checking for updates...
	I1026 02:26:19.815228   71797 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 02:26:19.816662   71797 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 02:26:19.818030   71797 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:26:19.819226   71797 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:26:19.820414   71797 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 02:26:19.821686   71797 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 02:26:19.823625   71797 config.go:182] Loaded profile config "auto-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:26:19.823733   71797 config.go:182] Loaded profile config "default-k8s-diff-port-661357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:26:19.823831   71797 config.go:182] Loaded profile config "embed-certs-767480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:26:19.823975   71797 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 02:26:19.862888   71797 out.go:177] * Using the kvm2 driver based on user configuration
	I1026 02:26:19.864144   71797 start.go:297] selected driver: kvm2
	I1026 02:26:19.864162   71797 start.go:901] validating driver "kvm2" against <nil>
	I1026 02:26:19.864174   71797 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 02:26:19.865019   71797 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:26:19.865099   71797 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 02:26:19.881548   71797 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 02:26:19.881588   71797 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 02:26:19.881888   71797 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:26:19.881921   71797 cni.go:84] Creating CNI manager for "kindnet"
	I1026 02:26:19.881930   71797 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 02:26:19.881980   71797 start.go:340] cluster config:
	{Name:kindnet-761631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:26:19.882088   71797 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:26:19.884445   71797 out.go:177] * Starting "kindnet-761631" primary control-plane node in "kindnet-761631" cluster
	I1026 02:26:19.885832   71797 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:26:19.885870   71797 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 02:26:19.885877   71797 cache.go:56] Caching tarball of preloaded images
	I1026 02:26:19.885954   71797 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 02:26:19.885965   71797 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 02:26:19.886045   71797 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/config.json ...
	I1026 02:26:19.886062   71797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/config.json: {Name:mk584f18127adbe94cfe31a1be196e082c6cb015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:26:19.886178   71797 start.go:360] acquireMachinesLock for kindnet-761631: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 02:26:19.886204   71797 start.go:364] duration metric: took 14.442µs to acquireMachinesLock for "kindnet-761631"
	I1026 02:26:19.886220   71797 start.go:93] Provisioning new machine with config: &{Name:kindnet-761631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.2 ClusterName:kindnet-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 02:26:19.886277   71797 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.617177505Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909580617147796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ae384bf-4690-40aa-9dcb-d15e9d3b33d4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.617875226Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbadcd16-e5e6-48b2-a632-19f79bbfe5f1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.618111682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbadcd16-e5e6-48b2-a632-19f79bbfe5f1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.618306628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37,PodSandboxId:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729908318876194884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2824,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf095996f65d61435391825f447491d8b99ce45ea83ad6147d969d7a2eb83801,PodSandboxId:ce0853defb95f51622fcb3e5ad2e2496afe980b2865900bc8308c8a3b008444b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729908299053882013,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc5c98c7-431f-4722-8c46-33dafff2a3c0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237,PodSandboxId:f052f7dbfacb5f2fe6ec584b5265dcdba252a33acedbea28c7c1eef174938c1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729908295936977511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cs6fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05855bd2-58d5-4d83-b5b4-6b7d28b13957,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b,PodSandboxId:6cfba292d641f5bd6c55979d2e5acfbc399c884393af507f6c6305752d2c8f11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729908288174967446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlwh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e83fffc8-a912-4919-b
5f6-ccc2745bf855,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72,PodSandboxId:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729908288043691767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2
824,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c,PodSandboxId:c1605fc5bc9bc60a3e8e5fc21a12ed9f1a234177aa0148b5f2e68c7d80bef917,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908283264937179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7023c0641eec2819c0f2ce8282631f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d,PodSandboxId:a3b9f9a26b0303a1d6ca603c649b023e6533c45da5cb3257426c3ee9ef75fe55,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729908283272931573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f8eb99a7221787feb6623d61642305,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546,PodSandboxId:e2dbf33e6761cd9cee698fc4425b48a8493b9ac8d35b7ac9ae04dc5017b2b528,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729908283247735386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69e9479e6f97c36ab4818cbe06a2f90,},Annotations:map[string]string{io.kubernetes.container.hash:
c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa,PodSandboxId:8b9381980bb0356c8356984acf55315e9688845caf1855b7392faf02282fc58f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729908283236046317,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc7d9ad67417ee4369ecec880a71cbf,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dbadcd16-e5e6-48b2-a632-19f79bbfe5f1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.654281752Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a727b5ac-9dc6-44c2-a986-f758c83cae99 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.654361943Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a727b5ac-9dc6-44c2-a986-f758c83cae99 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.655228538Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c77b8b5f-19f4-4567-9851-2b858d82acd4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.655828373Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909580655784881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c77b8b5f-19f4-4567-9851-2b858d82acd4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.656275685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64a6d1c7-ae42-48e4-9933-d6412d714908 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.656345638Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64a6d1c7-ae42-48e4-9933-d6412d714908 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.656600450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37,PodSandboxId:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729908318876194884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2824,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf095996f65d61435391825f447491d8b99ce45ea83ad6147d969d7a2eb83801,PodSandboxId:ce0853defb95f51622fcb3e5ad2e2496afe980b2865900bc8308c8a3b008444b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729908299053882013,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc5c98c7-431f-4722-8c46-33dafff2a3c0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237,PodSandboxId:f052f7dbfacb5f2fe6ec584b5265dcdba252a33acedbea28c7c1eef174938c1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729908295936977511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cs6fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05855bd2-58d5-4d83-b5b4-6b7d28b13957,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b,PodSandboxId:6cfba292d641f5bd6c55979d2e5acfbc399c884393af507f6c6305752d2c8f11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729908288174967446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlwh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e83fffc8-a912-4919-b
5f6-ccc2745bf855,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72,PodSandboxId:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729908288043691767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2
824,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c,PodSandboxId:c1605fc5bc9bc60a3e8e5fc21a12ed9f1a234177aa0148b5f2e68c7d80bef917,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908283264937179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7023c0641eec2819c0f2ce8282631f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d,PodSandboxId:a3b9f9a26b0303a1d6ca603c649b023e6533c45da5cb3257426c3ee9ef75fe55,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729908283272931573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f8eb99a7221787feb6623d61642305,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546,PodSandboxId:e2dbf33e6761cd9cee698fc4425b48a8493b9ac8d35b7ac9ae04dc5017b2b528,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729908283247735386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69e9479e6f97c36ab4818cbe06a2f90,},Annotations:map[string]string{io.kubernetes.container.hash:
c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa,PodSandboxId:8b9381980bb0356c8356984acf55315e9688845caf1855b7392faf02282fc58f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729908283236046317,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc7d9ad67417ee4369ecec880a71cbf,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64a6d1c7-ae42-48e4-9933-d6412d714908 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.692585609Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3542098-3881-4265-a1d7-3bc85ccbf154 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.692698666Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3542098-3881-4265-a1d7-3bc85ccbf154 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.694136322Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4328edd5-e088-48a6-a869-c7b800570ee2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.694689710Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909580694664364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4328edd5-e088-48a6-a869-c7b800570ee2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.695235266Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c4daf5f-9f73-4754-9371-daea61330cf4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.695286148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c4daf5f-9f73-4754-9371-daea61330cf4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.695484002Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37,PodSandboxId:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729908318876194884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2824,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf095996f65d61435391825f447491d8b99ce45ea83ad6147d969d7a2eb83801,PodSandboxId:ce0853defb95f51622fcb3e5ad2e2496afe980b2865900bc8308c8a3b008444b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729908299053882013,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc5c98c7-431f-4722-8c46-33dafff2a3c0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237,PodSandboxId:f052f7dbfacb5f2fe6ec584b5265dcdba252a33acedbea28c7c1eef174938c1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729908295936977511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cs6fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05855bd2-58d5-4d83-b5b4-6b7d28b13957,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b,PodSandboxId:6cfba292d641f5bd6c55979d2e5acfbc399c884393af507f6c6305752d2c8f11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729908288174967446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlwh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e83fffc8-a912-4919-b
5f6-ccc2745bf855,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72,PodSandboxId:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729908288043691767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2
824,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c,PodSandboxId:c1605fc5bc9bc60a3e8e5fc21a12ed9f1a234177aa0148b5f2e68c7d80bef917,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908283264937179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7023c0641eec2819c0f2ce8282631f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d,PodSandboxId:a3b9f9a26b0303a1d6ca603c649b023e6533c45da5cb3257426c3ee9ef75fe55,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729908283272931573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f8eb99a7221787feb6623d61642305,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546,PodSandboxId:e2dbf33e6761cd9cee698fc4425b48a8493b9ac8d35b7ac9ae04dc5017b2b528,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729908283247735386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69e9479e6f97c36ab4818cbe06a2f90,},Annotations:map[string]string{io.kubernetes.container.hash:
c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa,PodSandboxId:8b9381980bb0356c8356984acf55315e9688845caf1855b7392faf02282fc58f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729908283236046317,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc7d9ad67417ee4369ecec880a71cbf,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c4daf5f-9f73-4754-9371-daea61330cf4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.727469527Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f00af667-38b4-4f04-bd13-1f95475c5cfb name=/runtime.v1.RuntimeService/Version
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.727612229Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f00af667-38b4-4f04-bd13-1f95475c5cfb name=/runtime.v1.RuntimeService/Version
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.728977173Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c6093f87-c604-4064-b858-d70392e99c46 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.729406881Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909580729379613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6093f87-c604-4064-b858-d70392e99c46 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.730115005Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0596d35-68c8-4583-955b-8755eb39c101 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.730171282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0596d35-68c8-4583-955b-8755eb39c101 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:26:20 embed-certs-767480 crio[704]: time="2024-10-26 02:26:20.730376594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37,PodSandboxId:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729908318876194884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2824,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf095996f65d61435391825f447491d8b99ce45ea83ad6147d969d7a2eb83801,PodSandboxId:ce0853defb95f51622fcb3e5ad2e2496afe980b2865900bc8308c8a3b008444b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729908299053882013,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc5c98c7-431f-4722-8c46-33dafff2a3c0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237,PodSandboxId:f052f7dbfacb5f2fe6ec584b5265dcdba252a33acedbea28c7c1eef174938c1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729908295936977511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cs6fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05855bd2-58d5-4d83-b5b4-6b7d28b13957,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b,PodSandboxId:6cfba292d641f5bd6c55979d2e5acfbc399c884393af507f6c6305752d2c8f11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729908288174967446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlwh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e83fffc8-a912-4919-b
5f6-ccc2745bf855,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72,PodSandboxId:18d36ab9890b07e5b3c327831d6849e75d926e7f5a045922c36067dd472cc6a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729908288043691767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a3b8d-f8fd-4d67-b4e0-cd4b532d2
824,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c,PodSandboxId:c1605fc5bc9bc60a3e8e5fc21a12ed9f1a234177aa0148b5f2e68c7d80bef917,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908283264937179,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7023c0641eec2819c0f2ce8282631f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d,PodSandboxId:a3b9f9a26b0303a1d6ca603c649b023e6533c45da5cb3257426c3ee9ef75fe55,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729908283272931573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f8eb99a7221787feb6623d61642305,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546,PodSandboxId:e2dbf33e6761cd9cee698fc4425b48a8493b9ac8d35b7ac9ae04dc5017b2b528,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729908283247735386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69e9479e6f97c36ab4818cbe06a2f90,},Annotations:map[string]string{io.kubernetes.container.hash:
c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa,PodSandboxId:8b9381980bb0356c8356984acf55315e9688845caf1855b7392faf02282fc58f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729908283236046317,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-767480,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc7d9ad67417ee4369ecec880a71cbf,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0596d35-68c8-4583-955b-8755eb39c101 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	971fd135577b8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   18d36ab9890b0       storage-provisioner
	cf095996f65d6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   ce0853defb95f       busybox
	ad855eaecc8f0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      21 minutes ago      Running             coredns                   1                   f052f7dbfacb5       coredns-7c65d6cfc9-cs6fv
	8e7db87c8d446       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      21 minutes ago      Running             kube-proxy                1                   6cfba292d641f       kube-proxy-nlwh5
	ab0a492003385       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   18d36ab9890b0       storage-provisioner
	3517cb2fe7b8b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   a3b9f9a26b030       etcd-embed-certs-767480
	4c4a9339a3c46       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      21 minutes ago      Running             kube-scheduler            1                   c1605fc5bc9bc       kube-scheduler-embed-certs-767480
	04347160a1b38       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      21 minutes ago      Running             kube-apiserver            1                   e2dbf33e6761c       kube-apiserver-embed-certs-767480
	63e4fa14d2052       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      21 minutes ago      Running             kube-controller-manager   1                   8b9381980bb03       kube-controller-manager-embed-certs-767480
	
	
	==> coredns [ad855eaecc8f02df813f5c3839ff119f374fda0d44fb7c6036c7dc4f43fa9237] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33110 - 51145 "HINFO IN 7420970859103797635.7978935013430623811. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010466713s
	
	
	==> describe nodes <==
	Name:               embed-certs-767480
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-767480
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=embed-certs-767480
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_26T01_57_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:56:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-767480
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 02:26:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 02:25:40 +0000   Sat, 26 Oct 2024 01:56:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 02:25:40 +0000   Sat, 26 Oct 2024 01:56:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 02:25:40 +0000   Sat, 26 Oct 2024 01:56:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 02:25:40 +0000   Sat, 26 Oct 2024 02:04:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.84
	  Hostname:    embed-certs-767480
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 088c65d91fef4086a939fa18be13c3d9
	  System UUID:                088c65d9-1fef-4086-a939-fa18be13c3d9
	  Boot ID:                    a50253e4-a196-4804-81f1-b0e701b06ad4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-7c65d6cfc9-cs6fv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-767480                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-767480             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-767480    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-nlwh5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-767480             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-c9cwx               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-767480 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-767480 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-767480 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-767480 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-767480 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-767480 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node embed-certs-767480 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-767480 event: Registered Node embed-certs-767480 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-767480 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-767480 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-767480 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-767480 event: Registered Node embed-certs-767480 in Controller
	
	
	==> dmesg <==
	[Oct26 02:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050747] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037046] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.766135] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.847613] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.530299] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.316030] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.057919] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061584] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.204845] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.131774] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.267443] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[  +3.942041] systemd-fstab-generator[785]: Ignoring "noauto" option for root device
	[  +2.074432] systemd-fstab-generator[907]: Ignoring "noauto" option for root device
	[  +0.061387] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.502596] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.445492] systemd-fstab-generator[1540]: Ignoring "noauto" option for root device
	[  +5.201315] kauditd_printk_skb: 82 callbacks suppressed
	[Oct26 02:05] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [3517cb2fe7b8b3f4a159045d8a2413cc0736cfb6aba637e53d48e210ce0e9f4d] <==
	{"level":"warn","ts":"2024-10-26T02:20:37.133294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"531.732129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:20:37.133635Z","caller":"traceutil/trace.go:171","msg":"trace[1420106570] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:1381; }","duration":"532.154947ms","start":"2024-10-26T02:20:36.601473Z","end":"2024-10-26T02:20:37.133628Z","steps":["trace[1420106570] 'agreement among raft nodes before linearized reading'  (duration: 531.699177ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:20:37.133683Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T02:20:36.601432Z","time spent":"532.240857ms","remote":"127.0.0.1:55030","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":0,"response size":29,"request content":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true "}
	{"level":"warn","ts":"2024-10-26T02:20:37.502147Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.051018ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:20:37.502375Z","caller":"traceutil/trace.go:171","msg":"trace[729135368] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1381; }","duration":"223.30692ms","start":"2024-10-26T02:20:37.279049Z","end":"2024-10-26T02:20:37.502356Z","steps":["trace[729135368] 'range keys from in-memory index tree'  (duration: 223.04028ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:20:37.502731Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.754161ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7744663890760899206 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1379 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-26T02:20:37.502839Z","caller":"traceutil/trace.go:171","msg":"trace[1872139879] linearizableReadLoop","detail":"{readStateIndex:1612; appliedIndex:1611; }","duration":"268.470555ms","start":"2024-10-26T02:20:37.234359Z","end":"2024-10-26T02:20:37.502829Z","steps":["trace[1872139879] 'read index received'  (duration: 12.481091ms)","trace[1872139879] 'applied index is now lower than readState.Index'  (duration: 255.988528ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T02:20:37.503013Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"268.647294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:20:37.503066Z","caller":"traceutil/trace.go:171","msg":"trace[1107362154] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1382; }","duration":"268.702237ms","start":"2024-10-26T02:20:37.234355Z","end":"2024-10-26T02:20:37.503057Z","steps":["trace[1107362154] 'agreement among raft nodes before linearized reading'  (duration: 268.626488ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:20:37.503219Z","caller":"traceutil/trace.go:171","msg":"trace[1385405340] transaction","detail":"{read_only:false; response_revision:1382; number_of_response:1; }","duration":"366.300409ms","start":"2024-10-26T02:20:37.136909Z","end":"2024-10-26T02:20:37.503209Z","steps":["trace[1385405340] 'process raft request'  (duration: 109.97946ms)","trace[1385405340] 'compare'  (duration: 255.143466ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T02:20:37.503308Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T02:20:37.136894Z","time spent":"366.373779ms","remote":"127.0.0.1:54802","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1379 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-10-26T02:24:45.511162Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1339}
	{"level":"info","ts":"2024-10-26T02:24:45.515078Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1339,"took":"3.282959ms","hash":2324580315,"current-db-size-bytes":2842624,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1662976,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-10-26T02:24:45.515183Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2324580315,"revision":1339,"compact-revision":1095}
	{"level":"warn","ts":"2024-10-26T02:25:13.200297Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.279182ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7744663890760900858 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1602 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-26T02:25:13.200484Z","caller":"traceutil/trace.go:171","msg":"trace[1379988901] transaction","detail":"{read_only:false; response_revision:1605; number_of_response:1; }","duration":"355.069574ms","start":"2024-10-26T02:25:12.845379Z","end":"2024-10-26T02:25:13.200448Z","steps":["trace[1379988901] 'process raft request'  (duration: 250.467271ms)","trace[1379988901] 'compare'  (duration: 104.205581ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T02:25:13.200608Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T02:25:12.845357Z","time spent":"355.213529ms","remote":"127.0.0.1:54802","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1602 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-26T02:25:13.891182Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.064712ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7744663890760900864 > lease_revoke:<id:6b7a92c69199209f>","response":"size:29"}
	{"level":"warn","ts":"2024-10-26T02:25:48.743018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.390038ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7744663890760901077 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.84\" mod_revision:1624 > success:<request_put:<key:\"/registry/masterleases/192.168.61.84\" value_size:66 lease:7744663890760901074 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.84\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-26T02:25:48.743406Z","caller":"traceutil/trace.go:171","msg":"trace[1346150850] linearizableReadLoop","detail":"{readStateIndex:1928; appliedIndex:1927; }","duration":"336.776234ms","start":"2024-10-26T02:25:48.406614Z","end":"2024-10-26T02:25:48.743390Z","steps":["trace[1346150850] 'read index received'  (duration: 224.002838ms)","trace[1346150850] 'applied index is now lower than readState.Index'  (duration: 112.771851ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-26T02:25:48.743473Z","caller":"traceutil/trace.go:171","msg":"trace[1857436166] transaction","detail":"{read_only:false; response_revision:1633; number_of_response:1; }","duration":"420.099961ms","start":"2024-10-26T02:25:48.323344Z","end":"2024-10-26T02:25:48.743444Z","steps":["trace[1857436166] 'process raft request'  (duration: 307.142595ms)","trace[1857436166] 'compare'  (duration: 112.301751ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T02:25:48.743775Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T02:25:48.323303Z","time spent":"420.404582ms","remote":"127.0.0.1:54666","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.61.84\" mod_revision:1624 > success:<request_put:<key:\"/registry/masterleases/192.168.61.84\" value_size:66 lease:7744663890760901074 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.84\" > >"}
	{"level":"warn","ts":"2024-10-26T02:25:48.743640Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"337.014944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:25:48.743943Z","caller":"traceutil/trace.go:171","msg":"trace[1636160034] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1633; }","duration":"337.326887ms","start":"2024-10-26T02:25:48.406606Z","end":"2024-10-26T02:25:48.743933Z","steps":["trace[1636160034] 'agreement among raft nodes before linearized reading'  (duration: 336.963987ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:25:48.743986Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T02:25:48.406564Z","time spent":"337.410957ms","remote":"127.0.0.1:54822","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	
	
	==> kernel <==
	 02:26:21 up 22 min,  0 users,  load average: 0.24, 0.15, 0.12
	Linux embed-certs-767480 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [04347160a1b3851e638cdb49d4ca2105543a0e2bb48cdc0ef3ee5bbce3069546] <==
	I1026 02:22:47.671662       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:22:47.671712       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 02:24:46.669972       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:24:46.670481       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1026 02:24:47.672554       1 handler_proxy.go:99] no RequestInfo found in the context
	W1026 02:24:47.672631       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:24:47.672720       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1026 02:24:47.672850       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 02:24:47.673979       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:24:47.674033       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 02:25:47.674767       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:25:47.674919       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1026 02:25:47.675123       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:25:47.675259       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 02:25:47.676079       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:25:47.677233       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [63e4fa14d2052f0c7190556fb1e9ef402a315390c178dfe68ba068ee772bc4aa] <==
	E1026 02:21:20.307390       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:21:20.706891       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="192.982µs"
	I1026 02:21:20.875733       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1026 02:21:32.701022       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="124.413µs"
	E1026 02:21:50.313338       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:21:50.882922       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:22:20.319378       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:22:20.890699       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:22:50.325889       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:22:50.900310       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:23:20.332134       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:23:20.908883       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:23:50.339348       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:23:50.916221       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:24:20.345269       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:24:20.925648       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:24:50.352306       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:24:50.946386       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:25:20.359930       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:25:20.954915       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1026 02:25:40.083912       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-767480"
	E1026 02:25:50.366260       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:25:50.964212       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:26:20.371888       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:26:20.973546       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [8e7db87c8d4460516a71d5b17c9415adf625ad158b15a23029aa1f2006dc755b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1026 02:04:48.333001       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1026 02:04:48.341591       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.84"]
	E1026 02:04:48.341671       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 02:04:48.372094       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1026 02:04:48.372130       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 02:04:48.372157       1 server_linux.go:169] "Using iptables Proxier"
	I1026 02:04:48.374241       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 02:04:48.374590       1 server.go:483] "Version info" version="v1.31.2"
	I1026 02:04:48.374614       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 02:04:48.376026       1 config.go:199] "Starting service config controller"
	I1026 02:04:48.376058       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1026 02:04:48.376088       1 config.go:105] "Starting endpoint slice config controller"
	I1026 02:04:48.376104       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1026 02:04:48.376632       1 config.go:328] "Starting node config controller"
	I1026 02:04:48.376657       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1026 02:04:48.476572       1 shared_informer.go:320] Caches are synced for service config
	I1026 02:04:48.476694       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1026 02:04:48.476742       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4c4a9339a3c463bd7c30a73b005cbb9a4d2bacb04383b6758327ac66e6634a4c] <==
	I1026 02:04:44.648034       1 serving.go:386] Generated self-signed cert in-memory
	W1026 02:04:46.631936       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 02:04:46.631970       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 02:04:46.632000       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 02:04:46.632008       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 02:04:46.653911       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1026 02:04:46.653945       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 02:04:46.656238       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1026 02:04:46.657368       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 02:04:46.657436       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 02:04:46.657469       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 02:04:46.758151       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 26 02:25:21 embed-certs-767480 kubelet[914]: E1026 02:25:21.953234     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909521952709345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:25:21 embed-certs-767480 kubelet[914]: E1026 02:25:21.953305     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909521952709345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:25:22 embed-certs-767480 kubelet[914]: E1026 02:25:22.687373     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c9cwx" podUID="62a837f0-6fdb-418e-a5dd-e3196bb51346"
	Oct 26 02:25:31 embed-certs-767480 kubelet[914]: E1026 02:25:31.954970     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909531954647364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:25:31 embed-certs-767480 kubelet[914]: E1026 02:25:31.955373     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909531954647364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:25:37 embed-certs-767480 kubelet[914]: E1026 02:25:37.688159     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c9cwx" podUID="62a837f0-6fdb-418e-a5dd-e3196bb51346"
	Oct 26 02:25:41 embed-certs-767480 kubelet[914]: E1026 02:25:41.700675     914 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 26 02:25:41 embed-certs-767480 kubelet[914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 26 02:25:41 embed-certs-767480 kubelet[914]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 26 02:25:41 embed-certs-767480 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 02:25:41 embed-certs-767480 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 02:25:41 embed-certs-767480 kubelet[914]: E1026 02:25:41.958314     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909541957773856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:25:41 embed-certs-767480 kubelet[914]: E1026 02:25:41.958343     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909541957773856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:25:49 embed-certs-767480 kubelet[914]: E1026 02:25:49.689839     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c9cwx" podUID="62a837f0-6fdb-418e-a5dd-e3196bb51346"
	Oct 26 02:25:51 embed-certs-767480 kubelet[914]: E1026 02:25:51.961050     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909551960223539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:25:51 embed-certs-767480 kubelet[914]: E1026 02:25:51.961613     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909551960223539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:26:01 embed-certs-767480 kubelet[914]: E1026 02:26:01.966719     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909561965903207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:26:01 embed-certs-767480 kubelet[914]: E1026 02:26:01.967166     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909561965903207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:26:04 embed-certs-767480 kubelet[914]: E1026 02:26:04.687282     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-c9cwx" podUID="62a837f0-6fdb-418e-a5dd-e3196bb51346"
	Oct 26 02:26:11 embed-certs-767480 kubelet[914]: E1026 02:26:11.968476     914 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909571968155029,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:26:11 embed-certs-767480 kubelet[914]: E1026 02:26:11.968553     914 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909571968155029,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:26:16 embed-certs-767480 kubelet[914]: E1026 02:26:16.700440     914 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 26 02:26:16 embed-certs-767480 kubelet[914]: E1026 02:26:16.700534     914 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 26 02:26:16 embed-certs-767480 kubelet[914]: E1026 02:26:16.700749     914 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2z4f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-c9cwx_kube-system(62a837f0-6fdb-418e-a5dd-e3196bb51346): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Oct 26 02:26:16 embed-certs-767480 kubelet[914]: E1026 02:26:16.702278     914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-c9cwx" podUID="62a837f0-6fdb-418e-a5dd-e3196bb51346"
	
	
	==> storage-provisioner [971fd135577b80d14d1e59efb986386e5ce97ff401fc93254eaa591b92e2ef37] <==
	I1026 02:05:18.973568       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 02:05:18.988710       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 02:05:18.988849       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 02:05:36.390392       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 02:05:36.390721       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-767480_4f4a0de3-cf93-4192-8714-e9960db385e4!
	I1026 02:05:36.393542       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"43cf9c3f-47ec-401d-97dc-2583e1748a16", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-767480_4f4a0de3-cf93-4192-8714-e9960db385e4 became leader
	I1026 02:05:36.492327       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-767480_4f4a0de3-cf93-4192-8714-e9960db385e4!
	
	
	==> storage-provisioner [ab0a492003385b9f2fbd6d3993bd3710fcea69461c74f5ed194b99ae2d3d7f72] <==
	I1026 02:04:48.177991       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 02:05:18.181887       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-767480 -n embed-certs-767480
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-767480 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-c9cwx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-767480 describe pod metrics-server-6867b74b74-c9cwx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-767480 describe pod metrics-server-6867b74b74-c9cwx: exit status 1 (69.898107ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-c9cwx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-767480 describe pod metrics-server-6867b74b74-c9cwx: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (485.39s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (369.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-093148 -n no-preload-093148
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-26 02:25:09.514079184 +0000 UTC m=+6124.281842353
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-093148 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-093148 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.692µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-093148 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-093148 -n no-preload-093148
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-093148 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-093148 logs -n 25: (1.358418606s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-093148             | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC | 26 Oct 24 01:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-093148                                   | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-767480            | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC | 26 Oct 24 01:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-385716        | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-093148                  | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-093148                                   | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC | 26 Oct 24 02:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-767480                 | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC | 26 Oct 24 02:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-385716                              | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC | 26 Oct 24 02:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-385716             | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC | 26 Oct 24 02:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-385716                              | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:11 UTC |
	| delete  | -p                                                     | disable-driver-mounts-713871 | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:11 UTC |
	|         | disable-driver-mounts-713871                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:12 UTC |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-661357  | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:12 UTC | 26 Oct 24 02:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:12 UTC |                     |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-661357       | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:15 UTC | 26 Oct 24 02:25 UTC |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-385716                              | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:24 UTC | 26 Oct 24 02:24 UTC |
	| start   | -p newest-cni-274222 --memory=2200 --alsologtostderr   | newest-cni-274222            | jenkins | v1.34.0 | 26 Oct 24 02:24 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
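	Note: each command in the table above is wrapped across several rows for layout only. For readability, the final start entry reconstructed as a single invocation (flags copied verbatim from the table; the `minikube` binary name is assumed, since the table records only the subcommand):
	  minikube start -p newest-cni-274222 --memory=2200 --alsologtostderr \
	    --wait=apiserver,system_pods,default_sa \
	    --feature-gates ServerSideApply=true \
	    --network-plugin=cni \
	    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.31.2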
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 02:24:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 02:24:40.702269   70088 out.go:345] Setting OutFile to fd 1 ...
	I1026 02:24:40.702361   70088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:24:40.702369   70088 out.go:358] Setting ErrFile to fd 2...
	I1026 02:24:40.702372   70088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:24:40.702537   70088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 02:24:40.703105   70088 out.go:352] Setting JSON to false
	I1026 02:24:40.704041   70088 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7621,"bootTime":1729901860,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 02:24:40.704148   70088 start.go:139] virtualization: kvm guest
	I1026 02:24:40.706619   70088 out.go:177] * [newest-cni-274222] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 02:24:40.708465   70088 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 02:24:40.708471   70088 notify.go:220] Checking for updates...
	I1026 02:24:40.709933   70088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 02:24:40.711121   70088 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:24:40.712387   70088 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:24:40.713790   70088 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 02:24:40.714981   70088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 02:24:40.716693   70088 config.go:182] Loaded profile config "default-k8s-diff-port-661357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:24:40.716833   70088 config.go:182] Loaded profile config "embed-certs-767480": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:24:40.716975   70088 config.go:182] Loaded profile config "no-preload-093148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:24:40.717084   70088 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 02:24:40.754093   70088 out.go:177] * Using the kvm2 driver based on user configuration
	I1026 02:24:40.755652   70088 start.go:297] selected driver: kvm2
	I1026 02:24:40.755671   70088 start.go:901] validating driver "kvm2" against <nil>
	I1026 02:24:40.755686   70088 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 02:24:40.756728   70088 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:24:40.756861   70088 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 02:24:40.773168   70088 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 02:24:40.773231   70088 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1026 02:24:40.773293   70088 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1026 02:24:40.773574   70088 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 02:24:40.773607   70088 cni.go:84] Creating CNI manager for ""
	I1026 02:24:40.773662   70088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:24:40.773670   70088 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 02:24:40.773714   70088 start.go:340] cluster config:
	{Name:newest-cni-274222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-274222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:24:40.773807   70088 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:24:40.775831   70088 out.go:177] * Starting "newest-cni-274222" primary control-plane node in "newest-cni-274222" cluster
	I1026 02:24:40.776998   70088 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:24:40.777033   70088 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 02:24:40.777039   70088 cache.go:56] Caching tarball of preloaded images
	I1026 02:24:40.777102   70088 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 02:24:40.777113   70088 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 02:24:40.777198   70088 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/newest-cni-274222/config.json ...
	I1026 02:24:40.777214   70088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/newest-cni-274222/config.json: {Name:mkb78e78bfaf6e1b5d495bdddcf52010431084c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:24:40.777333   70088 start.go:360] acquireMachinesLock for newest-cni-274222: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 02:24:40.777361   70088 start.go:364] duration metric: took 16.783µs to acquireMachinesLock for "newest-cni-274222"
	I1026 02:24:40.777377   70088 start.go:93] Provisioning new machine with config: &{Name:newest-cni-274222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-274222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 02:24:40.777460   70088 start.go:125] createHost starting for "" (driver="kvm2")
	I1026 02:24:39.213298   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:41.713077   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:40.778954   70088 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1026 02:24:40.779096   70088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:24:40.779137   70088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:24:40.793628   70088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I1026 02:24:40.794034   70088 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:24:40.794676   70088 main.go:141] libmachine: Using API Version  1
	I1026 02:24:40.794703   70088 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:24:40.795095   70088 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:24:40.795284   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetMachineName
	I1026 02:24:40.795442   70088 main.go:141] libmachine: (newest-cni-274222) Calling .DriverName
	I1026 02:24:40.795622   70088 start.go:159] libmachine.API.Create for "newest-cni-274222" (driver="kvm2")
	I1026 02:24:40.795649   70088 client.go:168] LocalClient.Create starting
	I1026 02:24:40.795680   70088 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 02:24:40.795726   70088 main.go:141] libmachine: Decoding PEM data...
	I1026 02:24:40.795747   70088 main.go:141] libmachine: Parsing certificate...
	I1026 02:24:40.795814   70088 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 02:24:40.795838   70088 main.go:141] libmachine: Decoding PEM data...
	I1026 02:24:40.795857   70088 main.go:141] libmachine: Parsing certificate...
	I1026 02:24:40.795882   70088 main.go:141] libmachine: Running pre-create checks...
	I1026 02:24:40.795899   70088 main.go:141] libmachine: (newest-cni-274222) Calling .PreCreateCheck
	I1026 02:24:40.796251   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetConfigRaw
	I1026 02:24:40.796670   70088 main.go:141] libmachine: Creating machine...
	I1026 02:24:40.796687   70088 main.go:141] libmachine: (newest-cni-274222) Calling .Create
	I1026 02:24:40.796824   70088 main.go:141] libmachine: (newest-cni-274222) Creating KVM machine...
	I1026 02:24:40.798114   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found existing default KVM network
	I1026 02:24:40.799742   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:40.799559   70112 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026a190}
	I1026 02:24:40.799768   70088 main.go:141] libmachine: (newest-cni-274222) DBG | created network xml: 
	I1026 02:24:40.799778   70088 main.go:141] libmachine: (newest-cni-274222) DBG | <network>
	I1026 02:24:40.799787   70088 main.go:141] libmachine: (newest-cni-274222) DBG |   <name>mk-newest-cni-274222</name>
	I1026 02:24:40.799796   70088 main.go:141] libmachine: (newest-cni-274222) DBG |   <dns enable='no'/>
	I1026 02:24:40.799802   70088 main.go:141] libmachine: (newest-cni-274222) DBG |   
	I1026 02:24:40.799811   70088 main.go:141] libmachine: (newest-cni-274222) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1026 02:24:40.799818   70088 main.go:141] libmachine: (newest-cni-274222) DBG |     <dhcp>
	I1026 02:24:40.799828   70088 main.go:141] libmachine: (newest-cni-274222) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1026 02:24:40.799835   70088 main.go:141] libmachine: (newest-cni-274222) DBG |     </dhcp>
	I1026 02:24:40.799844   70088 main.go:141] libmachine: (newest-cni-274222) DBG |   </ip>
	I1026 02:24:40.799857   70088 main.go:141] libmachine: (newest-cni-274222) DBG |   
	I1026 02:24:40.799869   70088 main.go:141] libmachine: (newest-cni-274222) DBG | </network>
	I1026 02:24:40.799883   70088 main.go:141] libmachine: (newest-cni-274222) DBG | 
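	Note: the DBG block above is the libvirt network XML the kvm2 driver generates before defining the VM. As an illustrative aside only (these commands are not part of the test run; the network name is taken from the log), the created network and its DHCP leases could be inspected on the host with:
	  virsh net-dumpxml mk-newest-cni-274222
	  virsh net-dhcp-leases mk-newest-cni-274222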
	I1026 02:24:40.805283   70088 main.go:141] libmachine: (newest-cni-274222) DBG | trying to create private KVM network mk-newest-cni-274222 192.168.39.0/24...
	I1026 02:24:40.875558   70088 main.go:141] libmachine: (newest-cni-274222) DBG | private KVM network mk-newest-cni-274222 192.168.39.0/24 created
	I1026 02:24:40.875589   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:40.875537   70112 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:24:40.875603   70088 main.go:141] libmachine: (newest-cni-274222) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/newest-cni-274222 ...
	I1026 02:24:40.875626   70088 main.go:141] libmachine: (newest-cni-274222) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 02:24:40.875642   70088 main.go:141] libmachine: (newest-cni-274222) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 02:24:41.126395   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:41.126260   70112 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/newest-cni-274222/id_rsa...
	I1026 02:24:41.365058   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:41.364942   70112 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/newest-cni-274222/newest-cni-274222.rawdisk...
	I1026 02:24:41.365088   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Writing magic tar header
	I1026 02:24:41.365106   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Writing SSH key tar header
	I1026 02:24:41.365124   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:41.365073   70112 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/newest-cni-274222 ...
	I1026 02:24:41.365301   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/newest-cni-274222
	I1026 02:24:41.365338   70088 main.go:141] libmachine: (newest-cni-274222) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/newest-cni-274222 (perms=drwx------)
	I1026 02:24:41.365349   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 02:24:41.365363   70088 main.go:141] libmachine: (newest-cni-274222) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 02:24:41.365379   70088 main.go:141] libmachine: (newest-cni-274222) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 02:24:41.365391   70088 main.go:141] libmachine: (newest-cni-274222) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 02:24:41.365405   70088 main.go:141] libmachine: (newest-cni-274222) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 02:24:41.365413   70088 main.go:141] libmachine: (newest-cni-274222) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 02:24:41.365437   70088 main.go:141] libmachine: (newest-cni-274222) Creating domain...
	I1026 02:24:41.365450   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:24:41.365478   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 02:24:41.365499   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 02:24:41.365524   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Checking permissions on dir: /home/jenkins
	I1026 02:24:41.365537   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Checking permissions on dir: /home
	I1026 02:24:41.365549   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Skipping /home - not owner
	I1026 02:24:41.366598   70088 main.go:141] libmachine: (newest-cni-274222) define libvirt domain using xml: 
	I1026 02:24:41.366620   70088 main.go:141] libmachine: (newest-cni-274222) <domain type='kvm'>
	I1026 02:24:41.366629   70088 main.go:141] libmachine: (newest-cni-274222)   <name>newest-cni-274222</name>
	I1026 02:24:41.366640   70088 main.go:141] libmachine: (newest-cni-274222)   <memory unit='MiB'>2200</memory>
	I1026 02:24:41.366663   70088 main.go:141] libmachine: (newest-cni-274222)   <vcpu>2</vcpu>
	I1026 02:24:41.366679   70088 main.go:141] libmachine: (newest-cni-274222)   <features>
	I1026 02:24:41.366698   70088 main.go:141] libmachine: (newest-cni-274222)     <acpi/>
	I1026 02:24:41.366716   70088 main.go:141] libmachine: (newest-cni-274222)     <apic/>
	I1026 02:24:41.366729   70088 main.go:141] libmachine: (newest-cni-274222)     <pae/>
	I1026 02:24:41.366747   70088 main.go:141] libmachine: (newest-cni-274222)     
	I1026 02:24:41.366760   70088 main.go:141] libmachine: (newest-cni-274222)   </features>
	I1026 02:24:41.366770   70088 main.go:141] libmachine: (newest-cni-274222)   <cpu mode='host-passthrough'>
	I1026 02:24:41.366777   70088 main.go:141] libmachine: (newest-cni-274222)   
	I1026 02:24:41.366787   70088 main.go:141] libmachine: (newest-cni-274222)   </cpu>
	I1026 02:24:41.366811   70088 main.go:141] libmachine: (newest-cni-274222)   <os>
	I1026 02:24:41.366828   70088 main.go:141] libmachine: (newest-cni-274222)     <type>hvm</type>
	I1026 02:24:41.366840   70088 main.go:141] libmachine: (newest-cni-274222)     <boot dev='cdrom'/>
	I1026 02:24:41.366852   70088 main.go:141] libmachine: (newest-cni-274222)     <boot dev='hd'/>
	I1026 02:24:41.366866   70088 main.go:141] libmachine: (newest-cni-274222)     <bootmenu enable='no'/>
	I1026 02:24:41.366881   70088 main.go:141] libmachine: (newest-cni-274222)   </os>
	I1026 02:24:41.366892   70088 main.go:141] libmachine: (newest-cni-274222)   <devices>
	I1026 02:24:41.366902   70088 main.go:141] libmachine: (newest-cni-274222)     <disk type='file' device='cdrom'>
	I1026 02:24:41.366918   70088 main.go:141] libmachine: (newest-cni-274222)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/newest-cni-274222/boot2docker.iso'/>
	I1026 02:24:41.366929   70088 main.go:141] libmachine: (newest-cni-274222)       <target dev='hdc' bus='scsi'/>
	I1026 02:24:41.366938   70088 main.go:141] libmachine: (newest-cni-274222)       <readonly/>
	I1026 02:24:41.366951   70088 main.go:141] libmachine: (newest-cni-274222)     </disk>
	I1026 02:24:41.366964   70088 main.go:141] libmachine: (newest-cni-274222)     <disk type='file' device='disk'>
	I1026 02:24:41.366976   70088 main.go:141] libmachine: (newest-cni-274222)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 02:24:41.366992   70088 main.go:141] libmachine: (newest-cni-274222)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/newest-cni-274222/newest-cni-274222.rawdisk'/>
	I1026 02:24:41.367002   70088 main.go:141] libmachine: (newest-cni-274222)       <target dev='hda' bus='virtio'/>
	I1026 02:24:41.367012   70088 main.go:141] libmachine: (newest-cni-274222)     </disk>
	I1026 02:24:41.367025   70088 main.go:141] libmachine: (newest-cni-274222)     <interface type='network'>
	I1026 02:24:41.367038   70088 main.go:141] libmachine: (newest-cni-274222)       <source network='mk-newest-cni-274222'/>
	I1026 02:24:41.367052   70088 main.go:141] libmachine: (newest-cni-274222)       <model type='virtio'/>
	I1026 02:24:41.367063   70088 main.go:141] libmachine: (newest-cni-274222)     </interface>
	I1026 02:24:41.367076   70088 main.go:141] libmachine: (newest-cni-274222)     <interface type='network'>
	I1026 02:24:41.367088   70088 main.go:141] libmachine: (newest-cni-274222)       <source network='default'/>
	I1026 02:24:41.367101   70088 main.go:141] libmachine: (newest-cni-274222)       <model type='virtio'/>
	I1026 02:24:41.367113   70088 main.go:141] libmachine: (newest-cni-274222)     </interface>
	I1026 02:24:41.367123   70088 main.go:141] libmachine: (newest-cni-274222)     <serial type='pty'>
	I1026 02:24:41.367132   70088 main.go:141] libmachine: (newest-cni-274222)       <target port='0'/>
	I1026 02:24:41.367142   70088 main.go:141] libmachine: (newest-cni-274222)     </serial>
	I1026 02:24:41.367149   70088 main.go:141] libmachine: (newest-cni-274222)     <console type='pty'>
	I1026 02:24:41.367159   70088 main.go:141] libmachine: (newest-cni-274222)       <target type='serial' port='0'/>
	I1026 02:24:41.367171   70088 main.go:141] libmachine: (newest-cni-274222)     </console>
	I1026 02:24:41.367183   70088 main.go:141] libmachine: (newest-cni-274222)     <rng model='virtio'>
	I1026 02:24:41.367193   70088 main.go:141] libmachine: (newest-cni-274222)       <backend model='random'>/dev/random</backend>
	I1026 02:24:41.367198   70088 main.go:141] libmachine: (newest-cni-274222)     </rng>
	I1026 02:24:41.367206   70088 main.go:141] libmachine: (newest-cni-274222)     
	I1026 02:24:41.367215   70088 main.go:141] libmachine: (newest-cni-274222)     
	I1026 02:24:41.367224   70088 main.go:141] libmachine: (newest-cni-274222)   </devices>
	I1026 02:24:41.367234   70088 main.go:141] libmachine: (newest-cni-274222) </domain>
	I1026 02:24:41.367245   70088 main.go:141] libmachine: (newest-cni-274222) 
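	Note: the block above is the complete libvirt domain XML defined for the node (cdrom boot ISO, raw virtio disk, and two virtio NICs on the private and default networks). As a further illustrative aside (commands assumed, not recorded by the test; domain name from the log), the defined domain and its interfaces could be checked with:
	  virsh dumpxml newest-cni-274222
	  virsh domiflist newest-cni-274222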
	I1026 02:24:41.371472   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:23:9a:21 in network default
	I1026 02:24:41.372077   70088 main.go:141] libmachine: (newest-cni-274222) Ensuring networks are active...
	I1026 02:24:41.372096   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:24:41.372782   70088 main.go:141] libmachine: (newest-cni-274222) Ensuring network default is active
	I1026 02:24:41.373126   70088 main.go:141] libmachine: (newest-cni-274222) Ensuring network mk-newest-cni-274222 is active
	I1026 02:24:41.373630   70088 main.go:141] libmachine: (newest-cni-274222) Getting domain xml...
	I1026 02:24:41.374442   70088 main.go:141] libmachine: (newest-cni-274222) Creating domain...
	I1026 02:24:42.628224   70088 main.go:141] libmachine: (newest-cni-274222) Waiting to get IP...
	I1026 02:24:42.628977   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:24:42.629397   70088 main.go:141] libmachine: (newest-cni-274222) DBG | unable to find current IP address of domain newest-cni-274222 in network mk-newest-cni-274222
	I1026 02:24:42.629459   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:42.629394   70112 retry.go:31] will retry after 302.492967ms: waiting for machine to come up
	I1026 02:24:42.934013   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:24:42.934496   70088 main.go:141] libmachine: (newest-cni-274222) DBG | unable to find current IP address of domain newest-cni-274222 in network mk-newest-cni-274222
	I1026 02:24:42.934522   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:42.934456   70112 retry.go:31] will retry after 301.561623ms: waiting for machine to come up
	I1026 02:24:43.237900   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:24:43.238420   70088 main.go:141] libmachine: (newest-cni-274222) DBG | unable to find current IP address of domain newest-cni-274222 in network mk-newest-cni-274222
	I1026 02:24:43.238457   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:43.238378   70112 retry.go:31] will retry after 465.245583ms: waiting for machine to come up
	I1026 02:24:43.704860   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:24:43.705441   70088 main.go:141] libmachine: (newest-cni-274222) DBG | unable to find current IP address of domain newest-cni-274222 in network mk-newest-cni-274222
	I1026 02:24:43.705470   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:43.705343   70112 retry.go:31] will retry after 602.363551ms: waiting for machine to come up
	I1026 02:24:44.309150   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:24:44.309669   70088 main.go:141] libmachine: (newest-cni-274222) DBG | unable to find current IP address of domain newest-cni-274222 in network mk-newest-cni-274222
	I1026 02:24:44.309697   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:44.309628   70112 retry.go:31] will retry after 614.6742ms: waiting for machine to come up
	I1026 02:24:44.925315   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:24:44.925798   70088 main.go:141] libmachine: (newest-cni-274222) DBG | unable to find current IP address of domain newest-cni-274222 in network mk-newest-cni-274222
	I1026 02:24:44.925825   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:44.925752   70112 retry.go:31] will retry after 690.279491ms: waiting for machine to come up
	I1026 02:24:45.617618   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:24:45.618089   70088 main.go:141] libmachine: (newest-cni-274222) DBG | unable to find current IP address of domain newest-cni-274222 in network mk-newest-cni-274222
	I1026 02:24:45.618166   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:45.618044   70112 retry.go:31] will retry after 746.524271ms: waiting for machine to come up
	I1026 02:24:44.212348   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:46.712108   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:46.366612   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:24:46.367154   70088 main.go:141] libmachine: (newest-cni-274222) DBG | unable to find current IP address of domain newest-cni-274222 in network mk-newest-cni-274222
	I1026 02:24:46.367177   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:46.367100   70112 retry.go:31] will retry after 1.426413534s: waiting for machine to come up
	I1026 02:24:47.795067   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:24:47.795602   70088 main.go:141] libmachine: (newest-cni-274222) DBG | unable to find current IP address of domain newest-cni-274222 in network mk-newest-cni-274222
	I1026 02:24:47.795629   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:47.795566   70112 retry.go:31] will retry after 1.700748234s: waiting for machine to come up
	I1026 02:24:49.498396   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:24:49.498854   70088 main.go:141] libmachine: (newest-cni-274222) DBG | unable to find current IP address of domain newest-cni-274222 in network mk-newest-cni-274222
	I1026 02:24:49.498883   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:49.498798   70112 retry.go:31] will retry after 2.25661053s: waiting for machine to come up
	I1026 02:24:48.712906   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:50.713123   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:51.757148   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:24:51.757658   70088 main.go:141] libmachine: (newest-cni-274222) DBG | unable to find current IP address of domain newest-cni-274222 in network mk-newest-cni-274222
	I1026 02:24:51.757682   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:51.757610   70112 retry.go:31] will retry after 2.518718297s: waiting for machine to come up
	I1026 02:24:54.279248   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:24:54.279723   70088 main.go:141] libmachine: (newest-cni-274222) DBG | unable to find current IP address of domain newest-cni-274222 in network mk-newest-cni-274222
	I1026 02:24:54.279749   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:54.279663   70112 retry.go:31] will retry after 2.510968998s: waiting for machine to come up
	I1026 02:24:53.211877   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:54.712269   67066 pod_ready.go:82] duration metric: took 4m0.006730297s for pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace to be "Ready" ...
	E1026 02:24:54.712297   67066 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1026 02:24:54.712304   67066 pod_ready.go:39] duration metric: took 4m3.606758001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:24:54.712318   67066 api_server.go:52] waiting for apiserver process to appear ...
	I1026 02:24:54.712345   67066 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:24:54.712390   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:24:54.754945   67066 cri.go:89] found id: "c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e"
	I1026 02:24:54.754968   67066 cri.go:89] found id: ""
	I1026 02:24:54.754975   67066 logs.go:282] 1 containers: [c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e]
	I1026 02:24:54.755020   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:24:54.758797   67066 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:24:54.758855   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:24:54.796456   67066 cri.go:89] found id: "b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72"
	I1026 02:24:54.796478   67066 cri.go:89] found id: ""
	I1026 02:24:54.796485   67066 logs.go:282] 1 containers: [b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72]
	I1026 02:24:54.796537   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:24:54.802408   67066 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:24:54.802478   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:24:54.844960   67066 cri.go:89] found id: "e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416"
	I1026 02:24:54.844982   67066 cri.go:89] found id: ""
	I1026 02:24:54.844991   67066 logs.go:282] 1 containers: [e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416]
	I1026 02:24:54.845047   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:24:54.848857   67066 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:24:54.848922   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:24:54.881608   67066 cri.go:89] found id: "c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55"
	I1026 02:24:54.881628   67066 cri.go:89] found id: ""
	I1026 02:24:54.881636   67066 logs.go:282] 1 containers: [c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55]
	I1026 02:24:54.881699   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:24:54.885362   67066 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:24:54.885435   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:24:54.921024   67066 cri.go:89] found id: "da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a"
	I1026 02:24:54.921051   67066 cri.go:89] found id: ""
	I1026 02:24:54.921060   67066 logs.go:282] 1 containers: [da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a]
	I1026 02:24:54.921115   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:24:54.925055   67066 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:24:54.925129   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:24:54.957985   67066 cri.go:89] found id: "a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8"
	I1026 02:24:54.958013   67066 cri.go:89] found id: ""
	I1026 02:24:54.958023   67066 logs.go:282] 1 containers: [a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8]
	I1026 02:24:54.958084   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:24:54.962126   67066 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:24:54.962190   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:24:54.999866   67066 cri.go:89] found id: ""
	I1026 02:24:54.999889   67066 logs.go:282] 0 containers: []
	W1026 02:24:54.999897   67066 logs.go:284] No container was found matching "kindnet"
	I1026 02:24:54.999902   67066 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:24:54.999959   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:24:55.035604   67066 cri.go:89] found id: "5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723"
	I1026 02:24:55.035626   67066 cri.go:89] found id: "17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d"
	I1026 02:24:55.035630   67066 cri.go:89] found id: ""
	I1026 02:24:55.035637   67066 logs.go:282] 2 containers: [5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723 17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d]
	I1026 02:24:55.035691   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:24:55.039467   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:24:55.042961   67066 logs.go:123] Gathering logs for kube-scheduler [c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55] ...
	I1026 02:24:55.042984   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55"
	I1026 02:24:55.076263   67066 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:24:55.076293   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:24:55.617176   67066 logs.go:123] Gathering logs for container status ...
	I1026 02:24:55.617214   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:24:55.656479   67066 logs.go:123] Gathering logs for kubelet ...
	I1026 02:24:55.656505   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:24:55.723411   67066 logs.go:123] Gathering logs for kube-apiserver [c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e] ...
	I1026 02:24:55.723447   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e"
	I1026 02:24:55.776962   67066 logs.go:123] Gathering logs for etcd [b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72] ...
	I1026 02:24:55.776996   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72"
	I1026 02:24:55.818971   67066 logs.go:123] Gathering logs for coredns [e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416] ...
	I1026 02:24:55.819006   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416"
	I1026 02:24:55.852965   67066 logs.go:123] Gathering logs for kube-proxy [da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a] ...
	I1026 02:24:55.852997   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a"
	I1026 02:24:55.887705   67066 logs.go:123] Gathering logs for kube-controller-manager [a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8] ...
	I1026 02:24:55.887733   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8"
	I1026 02:24:55.940984   67066 logs.go:123] Gathering logs for storage-provisioner [5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723] ...
	I1026 02:24:55.941015   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723"
	I1026 02:24:55.974692   67066 logs.go:123] Gathering logs for storage-provisioner [17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d] ...
	I1026 02:24:55.974737   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d"
	I1026 02:24:56.012851   67066 logs.go:123] Gathering logs for dmesg ...
	I1026 02:24:56.012880   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:24:56.026202   67066 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:24:56.026234   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
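	Note: the log-gathering pass above repeats one pattern per component: resolve the container ID with crictl, then tail that container's logs. Condensed from the commands recorded in the log (container ID left as a placeholder):
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  sudo /usr/bin/crictl logs --tail 400 <container-id>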
	I1026 02:24:56.792749   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:24:56.793205   70088 main.go:141] libmachine: (newest-cni-274222) DBG | unable to find current IP address of domain newest-cni-274222 in network mk-newest-cni-274222
	I1026 02:24:56.793248   70088 main.go:141] libmachine: (newest-cni-274222) DBG | I1026 02:24:56.793171   70112 retry.go:31] will retry after 4.455457804s: waiting for machine to come up
	I1026 02:24:58.647803   67066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:24:58.662661   67066 api_server.go:72] duration metric: took 4m14.820197479s to wait for apiserver process to appear ...
	I1026 02:24:58.662688   67066 api_server.go:88] waiting for apiserver healthz status ...
	I1026 02:24:58.662723   67066 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:24:58.662774   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:24:58.695464   67066 cri.go:89] found id: "c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e"
	I1026 02:24:58.695489   67066 cri.go:89] found id: ""
	I1026 02:24:58.695499   67066 logs.go:282] 1 containers: [c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e]
	I1026 02:24:58.695560   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:24:58.699135   67066 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:24:58.699195   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:24:58.740625   67066 cri.go:89] found id: "b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72"
	I1026 02:24:58.740644   67066 cri.go:89] found id: ""
	I1026 02:24:58.740652   67066 logs.go:282] 1 containers: [b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72]
	I1026 02:24:58.740699   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:24:58.744382   67066 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:24:58.744436   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:24:58.776051   67066 cri.go:89] found id: "e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416"
	I1026 02:24:58.776071   67066 cri.go:89] found id: ""
	I1026 02:24:58.776078   67066 logs.go:282] 1 containers: [e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416]
	I1026 02:24:58.776124   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:24:58.779927   67066 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:24:58.779997   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:24:58.821035   67066 cri.go:89] found id: "c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55"
	I1026 02:24:58.821059   67066 cri.go:89] found id: ""
	I1026 02:24:58.821068   67066 logs.go:282] 1 containers: [c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55]
	I1026 02:24:58.821124   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:24:58.824797   67066 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:24:58.824857   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:24:58.857052   67066 cri.go:89] found id: "da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a"
	I1026 02:24:58.857071   67066 cri.go:89] found id: ""
	I1026 02:24:58.857078   67066 logs.go:282] 1 containers: [da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a]
	I1026 02:24:58.857122   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:24:58.861472   67066 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:24:58.861543   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:24:58.895267   67066 cri.go:89] found id: "a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8"
	I1026 02:24:58.895290   67066 cri.go:89] found id: ""
	I1026 02:24:58.895297   67066 logs.go:282] 1 containers: [a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8]
	I1026 02:24:58.895345   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:24:58.899372   67066 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:24:58.899444   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:24:58.939455   67066 cri.go:89] found id: ""
	I1026 02:24:58.939488   67066 logs.go:282] 0 containers: []
	W1026 02:24:58.939497   67066 logs.go:284] No container was found matching "kindnet"
	I1026 02:24:58.939503   67066 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:24:58.939563   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:24:58.975077   67066 cri.go:89] found id: "5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723"
	I1026 02:24:58.975102   67066 cri.go:89] found id: "17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d"
	I1026 02:24:58.975108   67066 cri.go:89] found id: ""
	I1026 02:24:58.975116   67066 logs.go:282] 2 containers: [5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723 17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d]
	I1026 02:24:58.975185   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:24:58.979315   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:24:58.982962   67066 logs.go:123] Gathering logs for kube-proxy [da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a] ...
	I1026 02:24:58.982984   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a"
	I1026 02:24:59.024553   67066 logs.go:123] Gathering logs for storage-provisioner [5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723] ...
	I1026 02:24:59.024578   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723"
	I1026 02:24:59.058115   67066 logs.go:123] Gathering logs for container status ...
	I1026 02:24:59.058140   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:24:59.099968   67066 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:24:59.099994   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 02:24:59.205787   67066 logs.go:123] Gathering logs for dmesg ...
	I1026 02:24:59.205818   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:24:59.218540   67066 logs.go:123] Gathering logs for kube-apiserver [c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e] ...
	I1026 02:24:59.218566   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e"
	I1026 02:24:59.261869   67066 logs.go:123] Gathering logs for etcd [b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72] ...
	I1026 02:24:59.261902   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72"
	I1026 02:24:59.301011   67066 logs.go:123] Gathering logs for coredns [e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416] ...
	I1026 02:24:59.301047   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416"
	I1026 02:24:59.335455   67066 logs.go:123] Gathering logs for kube-scheduler [c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55] ...
	I1026 02:24:59.335489   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55"
	I1026 02:24:59.370008   67066 logs.go:123] Gathering logs for kube-controller-manager [a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8] ...
	I1026 02:24:59.370036   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8"
	I1026 02:24:59.419360   67066 logs.go:123] Gathering logs for storage-provisioner [17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d] ...
	I1026 02:24:59.419400   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d"
	I1026 02:24:59.454574   67066 logs.go:123] Gathering logs for kubelet ...
	I1026 02:24:59.454598   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:24:59.521750   67066 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:24:59.521786   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:25:01.249887   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:01.250363   70088 main.go:141] libmachine: (newest-cni-274222) Found IP for machine: 192.168.39.202
	I1026 02:25:01.250386   70088 main.go:141] libmachine: (newest-cni-274222) Reserving static IP address...
	I1026 02:25:01.250415   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has current primary IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:01.250713   70088 main.go:141] libmachine: (newest-cni-274222) DBG | unable to find host DHCP lease matching {name: "newest-cni-274222", mac: "52:54:00:d3:6d:d4", ip: "192.168.39.202"} in network mk-newest-cni-274222
	I1026 02:25:01.337377   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Getting to WaitForSSH function...
	I1026 02:25:01.337409   70088 main.go:141] libmachine: (newest-cni-274222) Reserved static IP address: 192.168.39.202
	I1026 02:25:01.337478   70088 main.go:141] libmachine: (newest-cni-274222) Waiting for SSH to be available...
	I1026 02:25:01.340245   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:01.340615   70088 main.go:141] libmachine: (newest-cni-274222) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222
	I1026 02:25:01.340641   70088 main.go:141] libmachine: (newest-cni-274222) DBG | unable to find defined IP address of network mk-newest-cni-274222 interface with MAC address 52:54:00:d3:6d:d4
	I1026 02:25:01.340655   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Using SSH client type: external
	I1026 02:25:01.340708   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/newest-cni-274222/id_rsa (-rw-------)
	I1026 02:25:01.340781   70088 main.go:141] libmachine: (newest-cni-274222) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/newest-cni-274222/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 02:25:01.340805   70088 main.go:141] libmachine: (newest-cni-274222) DBG | About to run SSH command:
	I1026 02:25:01.340821   70088 main.go:141] libmachine: (newest-cni-274222) DBG | exit 0
	I1026 02:25:01.344929   70088 main.go:141] libmachine: (newest-cni-274222) DBG | SSH cmd err, output: exit status 255: 
	I1026 02:25:01.344948   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1026 02:25:01.344957   70088 main.go:141] libmachine: (newest-cni-274222) DBG | command : exit 0
	I1026 02:25:01.344980   70088 main.go:141] libmachine: (newest-cni-274222) DBG | err     : exit status 255
	I1026 02:25:01.344996   70088 main.go:141] libmachine: (newest-cni-274222) DBG | output  : 
	I1026 02:25:04.345618   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Getting to WaitForSSH function...
	I1026 02:25:04.348436   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:04.349100   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:04.349128   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:04.349209   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Using SSH client type: external
	I1026 02:25:04.349236   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/newest-cni-274222/id_rsa (-rw-------)
	I1026 02:25:04.349272   70088 main.go:141] libmachine: (newest-cni-274222) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/newest-cni-274222/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 02:25:04.349290   70088 main.go:141] libmachine: (newest-cni-274222) DBG | About to run SSH command:
	I1026 02:25:04.349304   70088 main.go:141] libmachine: (newest-cni-274222) DBG | exit 0
	I1026 02:25:04.477755   70088 main.go:141] libmachine: (newest-cni-274222) DBG | SSH cmd err, output: <nil>: 
	I1026 02:25:04.478015   70088 main.go:141] libmachine: (newest-cni-274222) KVM machine creation complete!
	I1026 02:25:04.478354   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetConfigRaw
	I1026 02:25:04.478900   70088 main.go:141] libmachine: (newest-cni-274222) Calling .DriverName
	I1026 02:25:04.479119   70088 main.go:141] libmachine: (newest-cni-274222) Calling .DriverName
	I1026 02:25:04.479279   70088 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 02:25:04.479294   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetState
	I1026 02:25:04.480666   70088 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 02:25:04.480680   70088 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 02:25:04.480688   70088 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 02:25:04.480696   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHHostname
	I1026 02:25:04.483059   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:04.483390   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:04.483419   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:04.483569   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHPort
	I1026 02:25:04.483733   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHKeyPath
	I1026 02:25:04.483874   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHKeyPath
	I1026 02:25:04.484013   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHUsername
	I1026 02:25:04.484197   70088 main.go:141] libmachine: Using SSH client type: native
	I1026 02:25:04.484430   70088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1026 02:25:04.484444   70088 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 02:25:04.596742   70088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:25:04.596771   70088 main.go:141] libmachine: Detecting the provisioner...
	I1026 02:25:04.596780   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHHostname
	I1026 02:25:04.599404   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:04.599835   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:04.599862   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:04.599974   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHPort
	I1026 02:25:04.600175   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHKeyPath
	I1026 02:25:04.600328   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHKeyPath
	I1026 02:25:04.600493   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHUsername
	I1026 02:25:04.600652   70088 main.go:141] libmachine: Using SSH client type: native
	I1026 02:25:04.600897   70088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1026 02:25:04.600911   70088 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 02:25:04.710123   70088 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 02:25:04.710200   70088 main.go:141] libmachine: found compatible host: buildroot
	I1026 02:25:04.710208   70088 main.go:141] libmachine: Provisioning with buildroot...
	I1026 02:25:04.710215   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetMachineName
	I1026 02:25:04.710439   70088 buildroot.go:166] provisioning hostname "newest-cni-274222"
	I1026 02:25:04.710463   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetMachineName
	I1026 02:25:04.710630   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHHostname
	I1026 02:25:04.713231   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:04.713580   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:04.713615   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:04.713800   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHPort
	I1026 02:25:04.713963   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHKeyPath
	I1026 02:25:04.714087   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHKeyPath
	I1026 02:25:04.714226   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHUsername
	I1026 02:25:04.714356   70088 main.go:141] libmachine: Using SSH client type: native
	I1026 02:25:04.714561   70088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1026 02:25:04.714575   70088 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-274222 && echo "newest-cni-274222" | sudo tee /etc/hostname
	I1026 02:25:04.845280   70088 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-274222
	
	I1026 02:25:04.845311   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHHostname
	I1026 02:25:04.848470   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:04.848907   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:04.848940   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:04.849157   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHPort
	I1026 02:25:04.849347   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHKeyPath
	I1026 02:25:04.849540   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHKeyPath
	I1026 02:25:04.849714   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHUsername
	I1026 02:25:04.849900   70088 main.go:141] libmachine: Using SSH client type: native
	I1026 02:25:04.850085   70088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1026 02:25:04.850102   70088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-274222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-274222/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-274222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 02:25:04.970673   70088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:25:04.970710   70088 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 02:25:04.970728   70088 buildroot.go:174] setting up certificates
	I1026 02:25:04.970736   70088 provision.go:84] configureAuth start
	I1026 02:25:04.970759   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetMachineName
	I1026 02:25:04.971071   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetIP
	I1026 02:25:04.973493   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:04.973915   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:04.973944   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:04.974051   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHHostname
	I1026 02:25:04.976844   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:04.977202   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:04.977225   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:04.977379   70088 provision.go:143] copyHostCerts
	I1026 02:25:04.977469   70088 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 02:25:04.977485   70088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 02:25:04.977556   70088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 02:25:04.977641   70088 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 02:25:04.977650   70088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 02:25:04.977673   70088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 02:25:04.977734   70088 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 02:25:04.977742   70088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 02:25:04.977764   70088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 02:25:04.977807   70088 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.newest-cni-274222 san=[127.0.0.1 192.168.39.202 localhost minikube newest-cni-274222]
	I1026 02:25:05.226817   70088 provision.go:177] copyRemoteCerts
	I1026 02:25:05.226871   70088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 02:25:05.226891   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHHostname
	I1026 02:25:05.229567   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.229854   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:05.229884   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.230008   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHPort
	I1026 02:25:05.230186   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHKeyPath
	I1026 02:25:05.230343   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHUsername
	I1026 02:25:05.230454   70088 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/newest-cni-274222/id_rsa Username:docker}
	I1026 02:25:05.315946   70088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 02:25:05.339861   70088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 02:25:05.364478   70088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 02:25:05.390239   70088 provision.go:87] duration metric: took 419.491625ms to configureAuth
	I1026 02:25:05.390266   70088 buildroot.go:189] setting minikube options for container-runtime
	I1026 02:25:05.390430   70088 config.go:182] Loaded profile config "newest-cni-274222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:25:05.390497   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHHostname
	I1026 02:25:05.393245   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.393674   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:05.393704   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.393846   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHPort
	I1026 02:25:05.394058   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHKeyPath
	I1026 02:25:05.394224   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHKeyPath
	I1026 02:25:05.394369   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHUsername
	I1026 02:25:05.394499   70088 main.go:141] libmachine: Using SSH client type: native
	I1026 02:25:05.394657   70088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1026 02:25:05.394672   70088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 02:25:05.618679   70088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 02:25:05.618719   70088 main.go:141] libmachine: Checking connection to Docker...
	I1026 02:25:05.618729   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetURL
	I1026 02:25:05.620106   70088 main.go:141] libmachine: (newest-cni-274222) DBG | Using libvirt version 6000000
	I1026 02:25:05.622428   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.622766   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:05.622799   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.622923   70088 main.go:141] libmachine: Docker is up and running!
	I1026 02:25:05.622940   70088 main.go:141] libmachine: Reticulating splines...
	I1026 02:25:05.622947   70088 client.go:171] duration metric: took 24.827291899s to LocalClient.Create
	I1026 02:25:05.622971   70088 start.go:167] duration metric: took 24.827349375s to libmachine.API.Create "newest-cni-274222"
	I1026 02:25:05.622980   70088 start.go:293] postStartSetup for "newest-cni-274222" (driver="kvm2")
	I1026 02:25:05.622989   70088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 02:25:05.623007   70088 main.go:141] libmachine: (newest-cni-274222) Calling .DriverName
	I1026 02:25:05.623205   70088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 02:25:05.623228   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHHostname
	I1026 02:25:05.625354   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.625683   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:05.625712   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.625880   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHPort
	I1026 02:25:05.626061   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHKeyPath
	I1026 02:25:05.626205   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHUsername
	I1026 02:25:05.626314   70088 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/newest-cni-274222/id_rsa Username:docker}
	I1026 02:25:02.446276   67066 api_server.go:253] Checking apiserver healthz at https://192.168.72.18:8444/healthz ...
	I1026 02:25:02.450755   67066 api_server.go:279] https://192.168.72.18:8444/healthz returned 200:
	ok
	I1026 02:25:02.451734   67066 api_server.go:141] control plane version: v1.31.2
	I1026 02:25:02.451757   67066 api_server.go:131] duration metric: took 3.789062481s to wait for apiserver health ...
	I1026 02:25:02.451765   67066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 02:25:02.451788   67066 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 02:25:02.451837   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 02:25:02.487928   67066 cri.go:89] found id: "c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e"
	I1026 02:25:02.487955   67066 cri.go:89] found id: ""
	I1026 02:25:02.487964   67066 logs.go:282] 1 containers: [c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e]
	I1026 02:25:02.488030   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:25:02.492646   67066 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 02:25:02.492725   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 02:25:02.528589   67066 cri.go:89] found id: "b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72"
	I1026 02:25:02.528616   67066 cri.go:89] found id: ""
	I1026 02:25:02.528626   67066 logs.go:282] 1 containers: [b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72]
	I1026 02:25:02.528677   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:25:02.532860   67066 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 02:25:02.532949   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 02:25:02.571120   67066 cri.go:89] found id: "e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416"
	I1026 02:25:02.571142   67066 cri.go:89] found id: ""
	I1026 02:25:02.571149   67066 logs.go:282] 1 containers: [e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416]
	I1026 02:25:02.571203   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:25:02.576159   67066 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 02:25:02.576219   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 02:25:02.616986   67066 cri.go:89] found id: "c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55"
	I1026 02:25:02.617006   67066 cri.go:89] found id: ""
	I1026 02:25:02.617013   67066 logs.go:282] 1 containers: [c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55]
	I1026 02:25:02.617061   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:25:02.620991   67066 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 02:25:02.621049   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 02:25:02.657404   67066 cri.go:89] found id: "da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a"
	I1026 02:25:02.657450   67066 cri.go:89] found id: ""
	I1026 02:25:02.657460   67066 logs.go:282] 1 containers: [da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a]
	I1026 02:25:02.657506   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:25:02.661876   67066 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 02:25:02.661959   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 02:25:02.697979   67066 cri.go:89] found id: "a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8"
	I1026 02:25:02.698008   67066 cri.go:89] found id: ""
	I1026 02:25:02.698016   67066 logs.go:282] 1 containers: [a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8]
	I1026 02:25:02.698069   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:25:02.702171   67066 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 02:25:02.702249   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 02:25:02.737622   67066 cri.go:89] found id: ""
	I1026 02:25:02.737649   67066 logs.go:282] 0 containers: []
	W1026 02:25:02.737657   67066 logs.go:284] No container was found matching "kindnet"
	I1026 02:25:02.737665   67066 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 02:25:02.737721   67066 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 02:25:02.776123   67066 cri.go:89] found id: "5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723"
	I1026 02:25:02.776152   67066 cri.go:89] found id: "17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d"
	I1026 02:25:02.776159   67066 cri.go:89] found id: ""
	I1026 02:25:02.776169   67066 logs.go:282] 2 containers: [5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723 17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d]
	I1026 02:25:02.776233   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:25:02.780416   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:25:02.784157   67066 logs.go:123] Gathering logs for kube-proxy [da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a] ...
	I1026 02:25:02.784189   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a"
	I1026 02:25:02.825731   67066 logs.go:123] Gathering logs for CRI-O ...
	I1026 02:25:02.825770   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 02:25:03.222666   67066 logs.go:123] Gathering logs for container status ...
	I1026 02:25:03.222709   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 02:25:03.263713   67066 logs.go:123] Gathering logs for kubelet ...
	I1026 02:25:03.263750   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 02:25:03.334027   67066 logs.go:123] Gathering logs for dmesg ...
	I1026 02:25:03.334074   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 02:25:03.352482   67066 logs.go:123] Gathering logs for etcd [b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72] ...
	I1026 02:25:03.352512   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72"
	I1026 02:25:03.394489   67066 logs.go:123] Gathering logs for coredns [e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416] ...
	I1026 02:25:03.394539   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416"
	I1026 02:25:03.431442   67066 logs.go:123] Gathering logs for kube-scheduler [c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55] ...
	I1026 02:25:03.431471   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55"
	I1026 02:25:03.470041   67066 logs.go:123] Gathering logs for kube-controller-manager [a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8] ...
	I1026 02:25:03.470067   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8"
	I1026 02:25:03.527459   67066 logs.go:123] Gathering logs for storage-provisioner [5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723] ...
	I1026 02:25:03.527495   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723"
	I1026 02:25:03.562712   67066 logs.go:123] Gathering logs for storage-provisioner [17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d] ...
	I1026 02:25:03.562740   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d"
	I1026 02:25:03.597566   67066 logs.go:123] Gathering logs for describe nodes ...
	I1026 02:25:03.597591   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 02:25:03.708198   67066 logs.go:123] Gathering logs for kube-apiserver [c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e] ...
	I1026 02:25:03.708231   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e"
	I1026 02:25:06.261614   67066 system_pods.go:59] 8 kube-system pods found
	I1026 02:25:06.261645   67066 system_pods.go:61] "coredns-7c65d6cfc9-xpxp4" [d3ea4ee4-aab2-4c92-ab2f-e1026c703ea1] Running
	I1026 02:25:06.261651   67066 system_pods.go:61] "etcd-default-k8s-diff-port-661357" [e0edffc7-d9fa-45e0-9250-3ea465d61e01] Running
	I1026 02:25:06.261656   67066 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-661357" [87332b2c-b6bd-4008-8db7-76b60f782d8b] Running
	I1026 02:25:06.261660   67066 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-661357" [4eb18006-0e9c-466c-8be9-c16250a8851b] Running
	I1026 02:25:06.261663   67066 system_pods.go:61] "kube-proxy-c947q" [e41c6a1e-1a8e-4c49-93ff-e0c60a87ea69] Running
	I1026 02:25:06.261666   67066 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-661357" [af14b2f6-20bd-4f05-9a9d-ea1ca7e53887] Running
	I1026 02:25:06.261673   67066 system_pods.go:61] "metrics-server-6867b74b74-jkl5g" [023bd779-83b7-42ef-893d-f7ab70f08ae7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 02:25:06.261677   67066 system_pods.go:61] "storage-provisioner" [90c86915-4d74-4774-b8cd-86bf37672a55] Running
	I1026 02:25:06.261686   67066 system_pods.go:74] duration metric: took 3.809916248s to wait for pod list to return data ...
	I1026 02:25:06.261693   67066 default_sa.go:34] waiting for default service account to be created ...
	I1026 02:25:06.264584   67066 default_sa.go:45] found service account: "default"
	I1026 02:25:06.264615   67066 default_sa.go:55] duration metric: took 2.911824ms for default service account to be created ...
	I1026 02:25:06.264629   67066 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 02:25:06.269027   67066 system_pods.go:86] 8 kube-system pods found
	I1026 02:25:06.269055   67066 system_pods.go:89] "coredns-7c65d6cfc9-xpxp4" [d3ea4ee4-aab2-4c92-ab2f-e1026c703ea1] Running
	I1026 02:25:06.269060   67066 system_pods.go:89] "etcd-default-k8s-diff-port-661357" [e0edffc7-d9fa-45e0-9250-3ea465d61e01] Running
	I1026 02:25:06.269065   67066 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-661357" [87332b2c-b6bd-4008-8db7-76b60f782d8b] Running
	I1026 02:25:06.269070   67066 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-661357" [4eb18006-0e9c-466c-8be9-c16250a8851b] Running
	I1026 02:25:06.269076   67066 system_pods.go:89] "kube-proxy-c947q" [e41c6a1e-1a8e-4c49-93ff-e0c60a87ea69] Running
	I1026 02:25:06.269081   67066 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-661357" [af14b2f6-20bd-4f05-9a9d-ea1ca7e53887] Running
	I1026 02:25:06.269091   67066 system_pods.go:89] "metrics-server-6867b74b74-jkl5g" [023bd779-83b7-42ef-893d-f7ab70f08ae7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 02:25:06.269101   67066 system_pods.go:89] "storage-provisioner" [90c86915-4d74-4774-b8cd-86bf37672a55] Running
	I1026 02:25:06.269111   67066 system_pods.go:126] duration metric: took 4.475289ms to wait for k8s-apps to be running ...
	I1026 02:25:06.269122   67066 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 02:25:06.269167   67066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 02:25:06.290991   67066 system_svc.go:56] duration metric: took 21.858355ms WaitForService to wait for kubelet
	I1026 02:25:06.291026   67066 kubeadm.go:582] duration metric: took 4m22.448564855s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:25:06.291053   67066 node_conditions.go:102] verifying NodePressure condition ...
	I1026 02:25:06.294697   67066 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 02:25:06.294745   67066 node_conditions.go:123] node cpu capacity is 2
	I1026 02:25:06.294758   67066 node_conditions.go:105] duration metric: took 3.699582ms to run NodePressure ...
	I1026 02:25:06.294774   67066 start.go:241] waiting for startup goroutines ...
	I1026 02:25:06.294784   67066 start.go:246] waiting for cluster config update ...
	I1026 02:25:06.294800   67066 start.go:255] writing updated cluster config ...
	I1026 02:25:06.295239   67066 ssh_runner.go:195] Run: rm -f paused
	I1026 02:25:06.347719   67066 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1026 02:25:06.349724   67066 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-661357" cluster and "default" namespace by default
	I1026 02:25:05.711688   70088 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 02:25:05.715963   70088 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 02:25:05.715993   70088 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 02:25:05.716066   70088 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 02:25:05.716163   70088 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 02:25:05.716293   70088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 02:25:05.726051   70088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:25:05.750903   70088 start.go:296] duration metric: took 127.909539ms for postStartSetup
	I1026 02:25:05.750963   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetConfigRaw
	I1026 02:25:05.751648   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetIP
	I1026 02:25:05.754634   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.755012   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:05.755071   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.755275   70088 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/newest-cni-274222/config.json ...
	I1026 02:25:05.755505   70088 start.go:128] duration metric: took 24.978033524s to createHost
	I1026 02:25:05.755528   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHHostname
	I1026 02:25:05.758397   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.758704   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:05.758736   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.758860   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHPort
	I1026 02:25:05.759067   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHKeyPath
	I1026 02:25:05.759226   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHKeyPath
	I1026 02:25:05.759374   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHUsername
	I1026 02:25:05.759523   70088 main.go:141] libmachine: Using SSH client type: native
	I1026 02:25:05.759733   70088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1026 02:25:05.759746   70088 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 02:25:05.873929   70088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729909505.852960775
	
	I1026 02:25:05.873953   70088 fix.go:216] guest clock: 1729909505.852960775
	I1026 02:25:05.873960   70088 fix.go:229] Guest: 2024-10-26 02:25:05.852960775 +0000 UTC Remote: 2024-10-26 02:25:05.755516495 +0000 UTC m=+25.092242723 (delta=97.44428ms)
	I1026 02:25:05.873978   70088 fix.go:200] guest clock delta is within tolerance: 97.44428ms
	I1026 02:25:05.873983   70088 start.go:83] releasing machines lock for "newest-cni-274222", held for 25.09661334s
	I1026 02:25:05.873999   70088 main.go:141] libmachine: (newest-cni-274222) Calling .DriverName
	I1026 02:25:05.874269   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetIP
	I1026 02:25:05.876784   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.877274   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:05.877302   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.877501   70088 main.go:141] libmachine: (newest-cni-274222) Calling .DriverName
	I1026 02:25:05.878022   70088 main.go:141] libmachine: (newest-cni-274222) Calling .DriverName
	I1026 02:25:05.878167   70088 main.go:141] libmachine: (newest-cni-274222) Calling .DriverName
	I1026 02:25:05.878228   70088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 02:25:05.878273   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHHostname
	I1026 02:25:05.878389   70088 ssh_runner.go:195] Run: cat /version.json
	I1026 02:25:05.878409   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHHostname
	I1026 02:25:05.880648   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.881086   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:05.881116   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.881135   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.881226   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHPort
	I1026 02:25:05.881390   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHKeyPath
	I1026 02:25:05.881544   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHUsername
	I1026 02:25:05.881610   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:05.881638   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:05.881674   70088 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/newest-cni-274222/id_rsa Username:docker}
	I1026 02:25:05.881845   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHPort
	I1026 02:25:05.881986   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHKeyPath
	I1026 02:25:05.882161   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetSSHUsername
	I1026 02:25:05.882301   70088 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/newest-cni-274222/id_rsa Username:docker}
	I1026 02:25:05.995396   70088 ssh_runner.go:195] Run: systemctl --version
	I1026 02:25:06.001599   70088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 02:25:06.167605   70088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 02:25:06.173346   70088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 02:25:06.173441   70088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 02:25:06.189249   70088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 02:25:06.189277   70088 start.go:495] detecting cgroup driver to use...
	I1026 02:25:06.189346   70088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 02:25:06.205894   70088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 02:25:06.220866   70088 docker.go:217] disabling cri-docker service (if available) ...
	I1026 02:25:06.220941   70088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 02:25:06.234774   70088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 02:25:06.248438   70088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 02:25:06.383872   70088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 02:25:06.544420   70088 docker.go:233] disabling docker service ...
	I1026 02:25:06.544490   70088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 02:25:06.560720   70088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 02:25:06.577581   70088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 02:25:06.737350   70088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 02:25:06.868681   70088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 02:25:06.882581   70088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 02:25:06.901073   70088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 02:25:06.901142   70088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:25:06.911568   70088 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 02:25:06.911654   70088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:25:06.922701   70088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:25:06.933201   70088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:25:06.944861   70088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 02:25:06.956233   70088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:25:06.967092   70088 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:25:06.984886   70088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:25:06.995305   70088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 02:25:07.004892   70088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 02:25:07.004963   70088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 02:25:07.018126   70088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 02:25:07.028293   70088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:25:07.151330   70088 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 02:25:07.248910   70088 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 02:25:07.249047   70088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 02:25:07.253645   70088 start.go:563] Will wait 60s for crictl version
	I1026 02:25:07.253710   70088 ssh_runner.go:195] Run: which crictl
	I1026 02:25:07.257713   70088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 02:25:07.298451   70088 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 02:25:07.298559   70088 ssh_runner.go:195] Run: crio --version
	I1026 02:25:07.328477   70088 ssh_runner.go:195] Run: crio --version
	I1026 02:25:07.357950   70088 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 02:25:07.359485   70088 main.go:141] libmachine: (newest-cni-274222) Calling .GetIP
	I1026 02:25:07.362472   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:07.362896   70088 main.go:141] libmachine: (newest-cni-274222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:6d:d4", ip: ""} in network mk-newest-cni-274222: {Iface:virbr1 ExpiryTime:2024-10-26 03:24:55 +0000 UTC Type:0 Mac:52:54:00:d3:6d:d4 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-274222 Clientid:01:52:54:00:d3:6d:d4}
	I1026 02:25:07.362922   70088 main.go:141] libmachine: (newest-cni-274222) DBG | domain newest-cni-274222 has defined IP address 192.168.39.202 and MAC address 52:54:00:d3:6d:d4 in network mk-newest-cni-274222
	I1026 02:25:07.363192   70088 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 02:25:07.367286   70088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:25:07.381487   70088 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.228854648Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=770586ff-88e4-4c53-a9c3-dd02ba88a423 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.230964511Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0fbfc1d6-a4f0-451c-9075-3139fa2518ec name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.231526867Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909510231492981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fbfc1d6-a4f0-451c-9075-3139fa2518ec name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.232200501Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0cd6c6f-b008-4c85-a4cf-13598ea6fc4a name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.232295264Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0cd6c6f-b008-4c85-a4cf-13598ea6fc4a name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.232661803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193,PodSandboxId:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729908367391624921,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d23a8f36bb1fe2584be1d4740528515bd6c4a38c8e4cbfb7c9bb367e8ac1e2,PodSandboxId:27b69c8ae1d86c778c72e7c6bd0e0813d0d6bfdd6e2afe46550a574cdd737380,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729908347363073734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34789ee5-dad1-4115-b92d-39279ef3891c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0,PodSandboxId:fafa599cf7d015aa7b52ad2098de56c8ff177ae440a165661857ee496eb55f3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729908344219658485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bxg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d00ff8f-b1c5-4d37-bb5a-48874ca5fc31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45,PodSandboxId:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729908336576228072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff,PodSandboxId:8da5a57e4ecd0f232bfde887b487d26041d00e9f312b073d740c097d1f7287aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729908336547148198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z7nrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9041b89-8769-4652-8d39-0982091ffc
7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be,PodSandboxId:0decf26c87177916c6000ec3153146f7ec0d84429e35e3f76557dd0d700700da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908332892088330,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc33dc3fa197cefb0ec44ae046e226aa,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01,PodSandboxId:b1729a0b3728d5dbf05359004ffdec2a30272fe12697be229fb82ded5008b1f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729908332884732779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf022757e3de98e7b0dc46aec18ce11,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454,PodSandboxId:1ef21846a61143cc1bd02e902a029cc61367949d36b210b0fd6f2124a698dc24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729908332817000326,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0606e52df31155c2078e142a34e4ce34,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e,PodSandboxId:7bf40987a87ffe2e0eecafb6ecba68a8252a0033e1c70a7b1f64502f9de9fb6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729908332785750262,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11f585fa774eedc4c512138bd241fad,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0cd6c6f-b008-4c85-a4cf-13598ea6fc4a name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.273807809Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=4ab4ce5b-da02-4b0b-bbe0-ce5076a3d4fa name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.274063380Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:27b69c8ae1d86c778c72e7c6bd0e0813d0d6bfdd6e2afe46550a574cdd737380,Metadata:&PodSandboxMetadata{Name:busybox,Uid:34789ee5-dad1-4115-b92d-39279ef3891c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908344120185686,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34789ee5-dad1-4115-b92d-39279ef3891c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-26T02:05:36.136165370Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fafa599cf7d015aa7b52ad2098de56c8ff177ae440a165661857ee496eb55f3e,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-4bxg2,Uid:6d00ff8f-b1c5-4d37-bb5a-48874ca5fc31,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17299083440251708
02,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bxg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d00ff8f-b1c5-4d37-bb5a-48874ca5fc31,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-26T02:05:36.136167288Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:847a98f0f2386927ecfa624ee37d8a7da77bb5265e755dab745fa46974a6c032,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-kwrk2,Uid:25c9f457-5112-4b5b-8a28-6cb290f5ebdf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908342226243348,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-kwrk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c9f457-5112-4b5b-8a28-6cb290f5ebdf,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-26T02:05:36.1
36163062Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e7f5b94f-ba28-42f6-a8bf-1e7ab4248537,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908336451660420,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-26T02:05:36.136164350Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8da5a57e4ecd0f232bfde887b487d26041d00e9f312b073d740c097d1f7287aa,Metadata:&PodSandboxMetadata{Name:kube-proxy-z7nrz,Uid:f9041b89-8769-4652-8d39-0982091ffc7c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908336443517809,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-z7nrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9041b89-8769-4652-8d39-0982091ffc7c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-10-26T02:05:36.136160777Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b1729a0b3728d5dbf05359004ffdec2a30272fe12697be229fb82ded5008b1f7,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-093148,Uid:ccf022757e3de98e7b0dc46aec18ce11,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908332644610155,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf022757e3de98e7b0dc46aec18ce11,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.9:2379,kubernetes.io/config.hash: ccf022757e3de98e7b0dc46aec18ce11,kubernetes.io/config.seen: 2024-10-26T02:05:32.154325128Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7bf40987a87ffe2e0eecafb6ecba68a8252a0033e1c70a7b1f64502f9de9fb6f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-093148,Ui
d:b11f585fa774eedc4c512138bd241fad,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908332643071563,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11f585fa774eedc4c512138bd241fad,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.9:8443,kubernetes.io/config.hash: b11f585fa774eedc4c512138bd241fad,kubernetes.io/config.seen: 2024-10-26T02:05:32.135986289Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0decf26c87177916c6000ec3153146f7ec0d84429e35e3f76557dd0d700700da,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-093148,Uid:dc33dc3fa197cefb0ec44ae046e226aa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908332641458665,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kub
ernetes.pod.name: kube-scheduler-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc33dc3fa197cefb0ec44ae046e226aa,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dc33dc3fa197cefb0ec44ae046e226aa,kubernetes.io/config.seen: 2024-10-26T02:05:32.135991504Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1ef21846a61143cc1bd02e902a029cc61367949d36b210b0fd6f2124a698dc24,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-093148,Uid:0606e52df31155c2078e142a34e4ce34,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1729908332640187433,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0606e52df31155c2078e142a34e4ce34,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0606e52df31155c2078e142a34e4ce34,kubern
etes.io/config.seen: 2024-10-26T02:05:32.135990506Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4ab4ce5b-da02-4b0b-bbe0-ce5076a3d4fa name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.275026689Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fa3bbba-a676-4866-9597-7a45a89d2a02 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.275100332Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fa3bbba-a676-4866-9597-7a45a89d2a02 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.275901131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193,PodSandboxId:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729908367391624921,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d23a8f36bb1fe2584be1d4740528515bd6c4a38c8e4cbfb7c9bb367e8ac1e2,PodSandboxId:27b69c8ae1d86c778c72e7c6bd0e0813d0d6bfdd6e2afe46550a574cdd737380,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729908347363073734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34789ee5-dad1-4115-b92d-39279ef3891c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0,PodSandboxId:fafa599cf7d015aa7b52ad2098de56c8ff177ae440a165661857ee496eb55f3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729908344219658485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bxg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d00ff8f-b1c5-4d37-bb5a-48874ca5fc31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45,PodSandboxId:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729908336576228072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff,PodSandboxId:8da5a57e4ecd0f232bfde887b487d26041d00e9f312b073d740c097d1f7287aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729908336547148198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z7nrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9041b89-8769-4652-8d39-0982091ffc
7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be,PodSandboxId:0decf26c87177916c6000ec3153146f7ec0d84429e35e3f76557dd0d700700da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908332892088330,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc33dc3fa197cefb0ec44ae046e226aa,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01,PodSandboxId:b1729a0b3728d5dbf05359004ffdec2a30272fe12697be229fb82ded5008b1f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729908332884732779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf022757e3de98e7b0dc46aec18ce11,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454,PodSandboxId:1ef21846a61143cc1bd02e902a029cc61367949d36b210b0fd6f2124a698dc24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729908332817000326,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0606e52df31155c2078e142a34e4ce34,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e,PodSandboxId:7bf40987a87ffe2e0eecafb6ecba68a8252a0033e1c70a7b1f64502f9de9fb6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729908332785750262,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11f585fa774eedc4c512138bd241fad,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1fa3bbba-a676-4866-9597-7a45a89d2a02 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.280208611Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d039bca-cbd4-474c-ac66-f374e8c3ed31 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.280350698Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d039bca-cbd4-474c-ac66-f374e8c3ed31 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.283869194Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6c47b5b1-c810-4bcd-bb5c-2d34d394be4d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.284318142Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909510284293786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6c47b5b1-c810-4bcd-bb5c-2d34d394be4d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.284929873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0c28b3f-dc6a-4d07-89c7-3932604c1fe1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.285017340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0c28b3f-dc6a-4d07-89c7-3932604c1fe1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.285293688Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193,PodSandboxId:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729908367391624921,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d23a8f36bb1fe2584be1d4740528515bd6c4a38c8e4cbfb7c9bb367e8ac1e2,PodSandboxId:27b69c8ae1d86c778c72e7c6bd0e0813d0d6bfdd6e2afe46550a574cdd737380,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729908347363073734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34789ee5-dad1-4115-b92d-39279ef3891c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0,PodSandboxId:fafa599cf7d015aa7b52ad2098de56c8ff177ae440a165661857ee496eb55f3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729908344219658485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bxg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d00ff8f-b1c5-4d37-bb5a-48874ca5fc31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45,PodSandboxId:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729908336576228072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff,PodSandboxId:8da5a57e4ecd0f232bfde887b487d26041d00e9f312b073d740c097d1f7287aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729908336547148198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z7nrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9041b89-8769-4652-8d39-0982091ffc
7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be,PodSandboxId:0decf26c87177916c6000ec3153146f7ec0d84429e35e3f76557dd0d700700da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908332892088330,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc33dc3fa197cefb0ec44ae046e226aa,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01,PodSandboxId:b1729a0b3728d5dbf05359004ffdec2a30272fe12697be229fb82ded5008b1f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729908332884732779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf022757e3de98e7b0dc46aec18ce11,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454,PodSandboxId:1ef21846a61143cc1bd02e902a029cc61367949d36b210b0fd6f2124a698dc24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729908332817000326,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0606e52df31155c2078e142a34e4ce34,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e,PodSandboxId:7bf40987a87ffe2e0eecafb6ecba68a8252a0033e1c70a7b1f64502f9de9fb6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729908332785750262,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11f585fa774eedc4c512138bd241fad,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0c28b3f-dc6a-4d07-89c7-3932604c1fe1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.318686682Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a00b8e4-494e-4439-b96b-dee8f28692d9 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.318816147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a00b8e4-494e-4439-b96b-dee8f28692d9 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.320063913Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b09de084-2b4a-4d1e-a6e9-0daeade6f50a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.320907295Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909510320828340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b09de084-2b4a-4d1e-a6e9-0daeade6f50a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.321640868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5bdc5bdd-400c-451c-8118-7cfa54dbec23 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.321730615Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5bdc5bdd-400c-451c-8118-7cfa54dbec23 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:25:10 no-preload-093148 crio[707]: time="2024-10-26 02:25:10.322011680Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193,PodSandboxId:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729908367391624921,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d23a8f36bb1fe2584be1d4740528515bd6c4a38c8e4cbfb7c9bb367e8ac1e2,PodSandboxId:27b69c8ae1d86c778c72e7c6bd0e0813d0d6bfdd6e2afe46550a574cdd737380,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729908347363073734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34789ee5-dad1-4115-b92d-39279ef3891c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0,PodSandboxId:fafa599cf7d015aa7b52ad2098de56c8ff177ae440a165661857ee496eb55f3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729908344219658485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bxg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d00ff8f-b1c5-4d37-bb5a-48874ca5fc31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45,PodSandboxId:1c708f1cd9cb50d5ebe6f2f2a932b9d2ef0b360b41fd7c360591481e2ae72b70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729908336576228072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
7f5b94f-ba28-42f6-a8bf-1e7ab4248537,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff,PodSandboxId:8da5a57e4ecd0f232bfde887b487d26041d00e9f312b073d740c097d1f7287aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729908336547148198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z7nrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9041b89-8769-4652-8d39-0982091ffc
7c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be,PodSandboxId:0decf26c87177916c6000ec3153146f7ec0d84429e35e3f76557dd0d700700da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729908332892088330,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc33dc3fa197cefb0ec44ae046e226aa,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01,PodSandboxId:b1729a0b3728d5dbf05359004ffdec2a30272fe12697be229fb82ded5008b1f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729908332884732779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf022757e3de98e7b0dc46aec18ce11,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454,PodSandboxId:1ef21846a61143cc1bd02e902a029cc61367949d36b210b0fd6f2124a698dc24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729908332817000326,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0606e52df31155c2078e142a34e4ce34,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e,PodSandboxId:7bf40987a87ffe2e0eecafb6ecba68a8252a0033e1c70a7b1f64502f9de9fb6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729908332785750262,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-093148,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11f585fa774eedc4c512138bd241fad,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5bdc5bdd-400c-451c-8118-7cfa54dbec23 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ff836e5f3f5bd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       3                   1c708f1cd9cb5       storage-provisioner
	f9d23a8f36bb1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   27b69c8ae1d86       busybox
	c7f75959e8826       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Running             coredns                   1                   fafa599cf7d01       coredns-7c65d6cfc9-4bxg2
	ae236de084984       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       2                   1c708f1cd9cb5       storage-provisioner
	8c15e7d230254       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      19 minutes ago      Running             kube-proxy                1                   8da5a57e4ecd0       kube-proxy-z7nrz
	ab6ce981ea7a7       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      19 minutes ago      Running             kube-scheduler            1                   0decf26c87177       kube-scheduler-no-preload-093148
	1bcc48b027240       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   b1729a0b3728d       etcd-no-preload-093148
	dad51df9ec4db       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      19 minutes ago      Running             kube-controller-manager   1                   1ef21846a6114       kube-controller-manager-no-preload-093148
	e712dd7959873       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      19 minutes ago      Running             kube-apiserver            1                   7bf40987a87ff       kube-apiserver-no-preload-093148
	
	
	==> coredns [c7f75959e8826d0c71c23e134e6977940bdf5864b873f375af935987a8ffcaf0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60061 - 55559 "HINFO IN 1778746441980941812.3268527977647942046. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013346236s
	
	
	==> describe nodes <==
	Name:               no-preload-093148
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-093148
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=no-preload-093148
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_26T01_56_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:56:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-093148
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 02:25:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 02:21:25 +0000   Sat, 26 Oct 2024 01:56:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 02:21:25 +0000   Sat, 26 Oct 2024 01:56:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 02:21:25 +0000   Sat, 26 Oct 2024 01:56:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 02:21:25 +0000   Sat, 26 Oct 2024 02:05:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.9
	  Hostname:    no-preload-093148
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 386f8ff219bc4aa1a29c9a5b22a14fb6
	  System UUID:                386f8ff2-19bc-4aa1-a29c-9a5b22a14fb6
	  Boot ID:                    935ea570-396a-4311-bfbd-b623b11605f4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-4bxg2                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-093148                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-no-preload-093148             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-no-preload-093148    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-z7nrz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-no-preload-093148             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-kwrk2              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node no-preload-093148 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node no-preload-093148 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node no-preload-093148 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node no-preload-093148 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node no-preload-093148 event: Registered Node no-preload-093148 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-093148 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-093148 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-093148 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-093148 event: Registered Node no-preload-093148 in Controller
	
	
	==> dmesg <==
	[Oct26 02:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057401] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039826] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct26 02:05] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.031897] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.468018] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.019089] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.063764] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051693] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.214288] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.117206] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.265646] systemd-fstab-generator[696]: Ignoring "noauto" option for root device
	[ +15.853517] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.067913] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.737164] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +3.705239] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.325506] systemd-fstab-generator[2035]: Ignoring "noauto" option for root device
	[  +3.330017] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.137800] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [1bcc48b02724080a9b0601109fe9d10e890827125b1fc0a2e6677c9938780b01] <==
	{"level":"warn","ts":"2024-10-26T02:11:55.385496Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.467501ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:11:55.385583Z","caller":"traceutil/trace.go:171","msg":"trace[896981015] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:917; }","duration":"216.567194ms","start":"2024-10-26T02:11:55.168998Z","end":"2024-10-26T02:11:55.385565Z","steps":["trace[896981015] 'range keys from in-memory index tree'  (duration: 216.453553ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:11:55.385693Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"368.602512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:11:55.385737Z","caller":"traceutil/trace.go:171","msg":"trace[69896837] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:917; }","duration":"368.654909ms","start":"2024-10-26T02:11:55.017074Z","end":"2024-10-26T02:11:55.385729Z","steps":["trace[69896837] 'range keys from in-memory index tree'  (duration: 368.564295ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:11:55.385804Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T02:11:55.017040Z","time spent":"368.718908ms","remote":"127.0.0.1:51036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-10-26T02:15:34.894212Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":849}
	{"level":"info","ts":"2024-10-26T02:15:34.904853Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":849,"took":"10.178828ms","hash":3803687416,"current-db-size-bytes":2650112,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2650112,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-10-26T02:15:34.904922Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3803687416,"revision":849,"compact-revision":-1}
	{"level":"info","ts":"2024-10-26T02:20:34.901920Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1091}
	{"level":"info","ts":"2024-10-26T02:20:34.906537Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1091,"took":"4.292938ms","hash":446261458,"current-db-size-bytes":2650112,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1576960,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-26T02:20:34.906612Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":446261458,"revision":1091,"compact-revision":849}
	{"level":"warn","ts":"2024-10-26T02:20:37.818935Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"361.973152ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7857535356434641776 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.9\" mod_revision:1329 > success:<request_put:<key:\"/registry/masterleases/192.168.50.9\" value_size:65 lease:7857535356434641773 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.9\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-26T02:20:37.819259Z","caller":"traceutil/trace.go:171","msg":"trace[1460563108] linearizableReadLoop","detail":"{readStateIndex:1562; appliedIndex:1560; }","duration":"337.463635ms","start":"2024-10-26T02:20:37.481767Z","end":"2024-10-26T02:20:37.819231Z","steps":["trace[1460563108] 'read index received'  (duration: 336.372887ms)","trace[1460563108] 'applied index is now lower than readState.Index'  (duration: 1.090128ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-26T02:20:37.819369Z","caller":"traceutil/trace.go:171","msg":"trace[1629170299] transaction","detail":"{read_only:false; response_revision:1338; number_of_response:1; }","duration":"487.216598ms","start":"2024-10-26T02:20:37.332143Z","end":"2024-10-26T02:20:37.819359Z","steps":["trace[1629170299] 'process raft request'  (duration: 487.011028ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:20:37.819485Z","caller":"traceutil/trace.go:171","msg":"trace[1084602268] transaction","detail":"{read_only:false; response_revision:1337; number_of_response:1; }","duration":"489.90871ms","start":"2024-10-26T02:20:37.329562Z","end":"2024-10-26T02:20:37.819470Z","steps":["trace[1084602268] 'process raft request'  (duration: 126.488556ms)","trace[1084602268] 'compare'  (duration: 361.844896ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T02:20:37.819500Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T02:20:37.332129Z","time spent":"487.318373ms","remote":"127.0.0.1:51014","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1336 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-26T02:20:37.819619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"336.502339ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-26T02:20:37.819656Z","caller":"traceutil/trace.go:171","msg":"trace[690171635] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; response_count:0; response_revision:1338; }","duration":"336.536939ms","start":"2024-10-26T02:20:37.483109Z","end":"2024-10-26T02:20:37.819646Z","steps":["trace[690171635] 'agreement among raft nodes before linearized reading'  (duration: 336.479215ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:20:37.819717Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T02:20:37.483085Z","time spent":"336.625628ms","remote":"127.0.0.1:51314","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":31,"request content":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true "}
	{"level":"warn","ts":"2024-10-26T02:20:37.819567Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T02:20:37.329533Z","time spent":"489.997768ms","remote":"127.0.0.1:50884","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":116,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.9\" mod_revision:1329 > success:<request_put:<key:\"/registry/masterleases/192.168.50.9\" value_size:65 lease:7857535356434641773 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.9\" > >"}
	{"level":"warn","ts":"2024-10-26T02:20:37.819846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"338.078839ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:20:37.819877Z","caller":"traceutil/trace.go:171","msg":"trace[1044602117] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1338; }","duration":"338.108214ms","start":"2024-10-26T02:20:37.481762Z","end":"2024-10-26T02:20:37.819871Z","steps":["trace[1044602117] 'agreement among raft nodes before linearized reading'  (duration: 338.067715ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:20:37.819893Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T02:20:37.481722Z","time spent":"338.167417ms","remote":"127.0.0.1:50842","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-10-26T02:20:37.820031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.10465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-26T02:20:37.820060Z","caller":"traceutil/trace.go:171","msg":"trace[1021190837] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1338; }","duration":"132.135265ms","start":"2024-10-26T02:20:37.687920Z","end":"2024-10-26T02:20:37.820055Z","steps":["trace[1021190837] 'agreement among raft nodes before linearized reading'  (duration: 132.091311ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:25:10 up 20 min,  0 users,  load average: 0.07, 0.08, 0.08
	Linux no-preload-093148 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e712dd79598730a1ecc1b182f5b609a9ce80e4556bd181a7659cb03e703e755e] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1026 02:20:37.102807       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:20:37.102902       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 02:20:37.104039       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:20:37.104068       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 02:21:37.105109       1 handler_proxy.go:99] no RequestInfo found in the context
	W1026 02:21:37.105327       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:21:37.105465       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1026 02:21:37.105539       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 02:21:37.106639       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:21:37.106729       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 02:23:37.107268       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:23:37.107359       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1026 02:23:37.107481       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:23:37.107532       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 02:23:37.108493       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:23:37.108694       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [dad51df9ec4db358e1b9d9a99537f71c9f3d9014239efb2518b97a3bbb0c2454] <==
	E1026 02:20:09.839158       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:20:10.320772       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:20:39.846317       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:20:40.330912       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:21:09.852730       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:21:10.338860       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1026 02:21:25.492558       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-093148"
	E1026 02:21:39.859556       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:21:40.346203       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1026 02:21:54.227294       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="195.845µs"
	I1026 02:22:09.223889       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="52.183µs"
	E1026 02:22:09.865699       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:22:10.354287       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:22:39.870972       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:22:40.364954       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:23:09.877152       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:23:10.373598       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:23:39.883983       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:23:40.382238       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:24:09.889686       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:24:10.390432       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:24:39.895708       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:24:40.399744       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:25:09.902894       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:25:10.407644       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [8c15e7d230254d2b08235d3851fe04167039fadb8707f70fc8158498a15298ff] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1026 02:05:36.797358       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1026 02:05:36.813727       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.9"]
	E1026 02:05:36.813817       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 02:05:36.849045       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1026 02:05:36.849095       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 02:05:36.849131       1 server_linux.go:169] "Using iptables Proxier"
	I1026 02:05:36.851686       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 02:05:36.852086       1 server.go:483] "Version info" version="v1.31.2"
	I1026 02:05:36.852135       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 02:05:36.854451       1 config.go:199] "Starting service config controller"
	I1026 02:05:36.854885       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1026 02:05:36.855136       1 config.go:105] "Starting endpoint slice config controller"
	I1026 02:05:36.855171       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1026 02:05:36.855990       1 config.go:328] "Starting node config controller"
	I1026 02:05:36.856021       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1026 02:05:36.955356       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1026 02:05:36.955377       1 shared_informer.go:320] Caches are synced for service config
	I1026 02:05:36.956154       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ab6ce981ea7a7318d56f1c2cc24dd9adc4d168a59a6758cb3f232fe1385109be] <==
	I1026 02:05:34.028748       1 serving.go:386] Generated self-signed cert in-memory
	W1026 02:05:36.009240       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 02:05:36.009277       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 02:05:36.009287       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 02:05:36.009294       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 02:05:36.070947       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1026 02:05:36.070987       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 02:05:36.077709       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1026 02:05:36.077817       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 02:05:36.077847       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 02:05:36.077861       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W1026 02:05:36.085660       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1026 02:05:36.085713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1026 02:05:36.085762       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 02:05:36.085789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1026 02:05:36.085838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1026 02:05:36.085862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1026 02:05:36.178561       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 26 02:23:56 no-preload-093148 kubelet[1425]: E1026 02:23:56.210454    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kwrk2" podUID="25c9f457-5112-4b5b-8a28-6cb290f5ebdf"
	Oct 26 02:24:02 no-preload-093148 kubelet[1425]: E1026 02:24:02.456652    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909442456320325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:24:02 no-preload-093148 kubelet[1425]: E1026 02:24:02.456676    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909442456320325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:24:09 no-preload-093148 kubelet[1425]: E1026 02:24:09.211852    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kwrk2" podUID="25c9f457-5112-4b5b-8a28-6cb290f5ebdf"
	Oct 26 02:24:12 no-preload-093148 kubelet[1425]: E1026 02:24:12.458198    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909452457784379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:24:12 no-preload-093148 kubelet[1425]: E1026 02:24:12.458228    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909452457784379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:24:22 no-preload-093148 kubelet[1425]: E1026 02:24:22.460865    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909462460145017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:24:22 no-preload-093148 kubelet[1425]: E1026 02:24:22.461523    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909462460145017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:24:23 no-preload-093148 kubelet[1425]: E1026 02:24:23.210975    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kwrk2" podUID="25c9f457-5112-4b5b-8a28-6cb290f5ebdf"
	Oct 26 02:24:32 no-preload-093148 kubelet[1425]: E1026 02:24:32.224773    1425 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 26 02:24:32 no-preload-093148 kubelet[1425]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 26 02:24:32 no-preload-093148 kubelet[1425]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 26 02:24:32 no-preload-093148 kubelet[1425]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 02:24:32 no-preload-093148 kubelet[1425]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 02:24:32 no-preload-093148 kubelet[1425]: E1026 02:24:32.464707    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909472463295833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:24:32 no-preload-093148 kubelet[1425]: E1026 02:24:32.464734    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909472463295833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:24:38 no-preload-093148 kubelet[1425]: E1026 02:24:38.212120    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kwrk2" podUID="25c9f457-5112-4b5b-8a28-6cb290f5ebdf"
	Oct 26 02:24:42 no-preload-093148 kubelet[1425]: E1026 02:24:42.467000    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909482466350601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:24:42 no-preload-093148 kubelet[1425]: E1026 02:24:42.467075    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909482466350601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:24:49 no-preload-093148 kubelet[1425]: E1026 02:24:49.211164    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kwrk2" podUID="25c9f457-5112-4b5b-8a28-6cb290f5ebdf"
	Oct 26 02:24:52 no-preload-093148 kubelet[1425]: E1026 02:24:52.469850    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909492468712362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:24:52 no-preload-093148 kubelet[1425]: E1026 02:24:52.470381    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909492468712362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:25:02 no-preload-093148 kubelet[1425]: E1026 02:25:02.471903    1425 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909502471356761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:25:02 no-preload-093148 kubelet[1425]: E1026 02:25:02.471953    1425 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909502471356761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:25:04 no-preload-093148 kubelet[1425]: E1026 02:25:04.212477    1425 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kwrk2" podUID="25c9f457-5112-4b5b-8a28-6cb290f5ebdf"
	
	
	==> storage-provisioner [ae236de0849846c41ab90e00d7f67e3d025823026bb4d6ef1aff13f75e59ab45] <==
	I1026 02:05:36.658676       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 02:06:06.663704       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ff836e5f3f5bd7feb33d606e1ee07d42ee52d8163c3dc0a9b8f5549e0b464193] <==
	I1026 02:06:07.466563       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 02:06:07.478270       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 02:06:07.478345       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 02:06:24.880873       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 02:06:24.881066       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-093148_f5631b76-cc32-4b61-840a-d84782b96ec7!
	I1026 02:06:24.882761       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"87a9f819-85f4-4c7c-9e1f-5c5d894f2048", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-093148_f5631b76-cc32-4b61-840a-d84782b96ec7 became leader
	I1026 02:06:24.981749       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-093148_f5631b76-cc32-4b61-840a-d84782b96ec7!
	

                                                
                                                
-- /stdout --
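
Note on the captured logs above: the kubelet section shows the metrics-server pod stuck in ImagePullBackOff on the unreachable image fake.domain/registry.k8s.io/echoserver:1.4, which is why the post-mortem below finds that pod in a non-Running phase. The following is only a minimal client-go sketch of how one might confirm the image the Deployment points at, outside the test harness; the kubeconfig path is hypothetical, and the assumption that the Deployment is named "metrics-server" in kube-system is inferred from the replicaset name in the logs above, not taken from the test code.

// Sketch: print the image(s) used by the metrics-server Deployment.
// Assumes a kubeconfig for the no-preload-093148 cluster; names are inferred
// from the captured logs above, not from the minikube test helpers.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	dep, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range dep.Spec.Template.Spec.Containers {
		fmt.Printf("container %s uses image %s\n", c.Name, c.Image)
	}
}
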
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-093148 -n no-preload-093148
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-093148 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-kwrk2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-093148 describe pod metrics-server-6867b74b74-kwrk2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-093148 describe pod metrics-server-6867b74b74-kwrk2: exit status 1 (81.74771ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-kwrk2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-093148 describe pod metrics-server-6867b74b74-kwrk2: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (369.61s)
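
Bridging note before the next failure: both this test and the old-k8s-version variant below rely on a helper that repeatedly lists pods by label selector until a matching pod appears or the timeout expires; when the apiserver is unreachable, each attempt produces one of the "pod list ... connection refused" WARNING lines seen below. The following is a rough, hedged approximation of that polling loop in client-go, not the actual helpers_test.go implementation; the kubeconfig path, poll interval, namespace, selector, and 9m timeout are illustrative values matching the output below.

// Rough sketch of a label-selector poll like the one the test helpers perform.
// Not the real helpers_test.go code; values are illustrative only.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(9 * time.Minute) // timeout mirrors "waiting 9m0s" below
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// This branch corresponds to the repeated WARNING lines when the
			// apiserver refuses connections.
			fmt.Println("WARNING: pod list returned:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		if len(pods.Items) > 0 {
			fmt.Println("found pod:", pods.Items[0].Name)
			return
		}
		time.Sleep(3 * time.Second)
	}
	log.Fatal("timed out waiting for pods matching k8s-app=kubernetes-dashboard")
}
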

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (147.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
E1026 02:23:52.961658   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.33:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.33:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-385716 -n old-k8s-version-385716
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-385716 -n old-k8s-version-385716: exit status 2 (218.238122ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-385716" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-385716 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-385716 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.925µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-385716 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
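For reference, the assertion that times out above is essentially a label-selector pod poll followed by an image check on the dashboard-metrics-scraper deployment. Below is a minimal client-go sketch of that flow, not the actual helpers_test.go / start_stop_delete_test.go code; the kubeconfig path is taken from this run's environment, the 10-second poll interval is an assumption, and it presumes the kubeconfig's current context points at old-k8s-version-385716.

// dashboard_check.go - illustrative sketch of the dashboard addon check; not the test harness code.
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from this run's kubeconfig (assumes its current context is the profile under test).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19868-8680/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		// The same pod list the warnings above come from; with the apiserver
		// stopped this returns "connection refused" on every attempt.
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			fmt.Println("WARNING: pod list failed:", err)
		} else {
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("dashboard pod ready:", p.Name)
						checkScraperImage(ctx, cs)
						return
					}
				}
			}
		}
		select {
		case <-ctx.Done():
			// This is the "failed to start within 9m0s: context deadline exceeded" path.
			fmt.Println("timed out waiting for k8s-app=kubernetes-dashboard:", ctx.Err())
			return
		case <-time.After(10 * time.Second):
		}
	}
}

// checkScraperImage mirrors the follow-up assertion: the scraper deployment's
// container image should contain the override registry.k8s.io/echoserver:1.4.
func checkScraperImage(ctx context.Context, cs *kubernetes.Clientset) {
	dep, err := cs.AppsV1().Deployments("kubernetes-dashboard").Get(ctx, "dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		fmt.Println("could not get deployment:", err)
		return
	}
	for _, c := range dep.Spec.Template.Spec.Containers {
		if strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4") {
			fmt.Println("expected image found:", c.Image)
			return
		}
	}
	fmt.Println("expected image not found in dashboard-metrics-scraper")
}

The equivalent spot checks from a shell are kubectl --context old-k8s-version-385716 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard and the describe deploy/dashboard-metrics-scraper call the harness falls back to above.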
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-385716 -n old-k8s-version-385716
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-385716 -n old-k8s-version-385716: exit status 2 (234.577379ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-385716 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-226333                                        | pause-226333                 | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	| start   | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:56 UTC |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:56 UTC | 26 Oct 24 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-093148             | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC | 26 Oct 24 01:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-093148                                   | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-767480            | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC | 26 Oct 24 01:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 01:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-385716        | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-093148                  | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-093148                                   | no-preload-093148            | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC | 26 Oct 24 02:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-767480                 | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-767480                                  | embed-certs-767480           | jenkins | v1.34.0 | 26 Oct 24 01:59 UTC | 26 Oct 24 02:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-385716                              | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC | 26 Oct 24 02:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-385716             | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC | 26 Oct 24 02:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-385716                              | old-k8s-version-385716       | jenkins | v1.34.0 | 26 Oct 24 02:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-970804                           | kubernetes-upgrade-970804    | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:11 UTC |
	| delete  | -p                                                     | disable-driver-mounts-713871 | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:11 UTC |
	|         | disable-driver-mounts-713871                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:11 UTC | 26 Oct 24 02:12 UTC |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-661357  | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:12 UTC | 26 Oct 24 02:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:12 UTC |                     |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-661357       | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-661357 | jenkins | v1.34.0 | 26 Oct 24 02:15 UTC |                     |
	|         | default-k8s-diff-port-661357                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 02:15:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 02:15:27.297785   67066 out.go:345] Setting OutFile to fd 1 ...
	I1026 02:15:27.297934   67066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:15:27.297945   67066 out.go:358] Setting ErrFile to fd 2...
	I1026 02:15:27.297952   67066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:15:27.298168   67066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 02:15:27.298737   67066 out.go:352] Setting JSON to false
	I1026 02:15:27.299667   67066 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7067,"bootTime":1729901860,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 02:15:27.299764   67066 start.go:139] virtualization: kvm guest
	I1026 02:15:27.302194   67066 out.go:177] * [default-k8s-diff-port-661357] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 02:15:27.303883   67066 notify.go:220] Checking for updates...
	I1026 02:15:27.303910   67066 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 02:15:27.305362   67066 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 02:15:27.307037   67066 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:15:27.308350   67066 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:15:27.309738   67066 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 02:15:27.311000   67066 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 02:15:27.312448   67066 config.go:182] Loaded profile config "default-k8s-diff-port-661357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:15:27.312903   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:15:27.312969   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:15:27.328075   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
	I1026 02:15:27.328420   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:15:27.328973   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:15:27.328995   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:15:27.329389   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:15:27.329584   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:15:27.329870   67066 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 02:15:27.330179   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:15:27.330236   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:15:27.345446   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42925
	I1026 02:15:27.345922   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:15:27.346439   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:15:27.346465   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:15:27.346771   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:15:27.346915   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:15:27.385240   67066 out.go:177] * Using the kvm2 driver based on existing profile
	I1026 02:15:27.386493   67066 start.go:297] selected driver: kvm2
	I1026 02:15:27.386506   67066 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-661357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:15:27.386627   67066 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 02:15:27.387355   67066 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:15:27.387437   67066 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 02:15:27.402972   67066 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 02:15:27.403447   67066 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:15:27.403480   67066 cni.go:84] Creating CNI manager for ""
	I1026 02:15:27.403538   67066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:15:27.403573   67066 start.go:340] cluster config:
	{Name:default-k8s-diff-port-661357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:15:27.403717   67066 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:15:27.405745   67066 out.go:177] * Starting "default-k8s-diff-port-661357" primary control-plane node in "default-k8s-diff-port-661357" cluster
	I1026 02:15:27.407319   67066 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:15:27.407362   67066 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 02:15:27.407375   67066 cache.go:56] Caching tarball of preloaded images
	I1026 02:15:27.407472   67066 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 02:15:27.407487   67066 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 02:15:27.407612   67066 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/config.json ...
	I1026 02:15:27.407850   67066 start.go:360] acquireMachinesLock for default-k8s-diff-port-661357: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 02:15:27.407893   67066 start.go:364] duration metric: took 24.39µs to acquireMachinesLock for "default-k8s-diff-port-661357"
	I1026 02:15:27.407914   67066 start.go:96] Skipping create...Using existing machine configuration
	I1026 02:15:27.407922   67066 fix.go:54] fixHost starting: 
	I1026 02:15:27.408209   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:15:27.408249   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:15:27.422977   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42159
	I1026 02:15:27.423350   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:15:27.423824   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:15:27.423847   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:15:27.424171   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:15:27.424338   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:15:27.424502   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:15:27.426304   67066 fix.go:112] recreateIfNeeded on default-k8s-diff-port-661357: state=Running err=<nil>
	W1026 02:15:27.426337   67066 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 02:15:27.428299   67066 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-661357" VM ...
	I1026 02:15:27.429557   67066 machine.go:93] provisionDockerMachine start ...
	I1026 02:15:27.429586   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:15:27.429817   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:15:27.432629   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:15:27.433124   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:11:37 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:15:27.433157   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:15:27.433315   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:15:27.433540   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:15:27.433688   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:15:27.433817   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:15:27.433940   67066 main.go:141] libmachine: Using SSH client type: native
	I1026 02:15:27.434150   67066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:15:27.434165   67066 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 02:15:30.317691   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:33.389688   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:39.469675   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:42.541741   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:48.625728   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:15:51.693782   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:00.813656   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:03.885647   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:09.965637   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:13.037626   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:19.117681   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:22.189689   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:28.273657   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:31.341685   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:37.421654   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:40.493714   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:46.573667   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:49.645724   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:55.725675   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:16:58.797640   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:04.877698   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:07.949690   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:14.033654   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:17.101631   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:23.181650   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:26.253675   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:32.333666   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:35.405742   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:41.489689   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:44.557647   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:50.637659   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:53.709622   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:17:59.789723   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:02.861727   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:08.945680   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:12.013718   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:18.093693   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:21.169616   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:27.245681   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:30.317690   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:36.397652   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:39.469689   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:45.549661   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:48.621666   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:54.705716   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:18:57.773712   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:03.853656   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:06.925672   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:13.005700   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:16.077672   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:22.161718   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:25.229728   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:31.313674   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:34.381761   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:40.461651   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:43.533728   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:49.613664   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:52.689645   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:19:58.765677   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:20:01.837755   67066 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.18:22: connect: no route to host
	I1026 02:20:04.838824   67066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:20:04.838856   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetMachineName
	I1026 02:20:04.839160   67066 buildroot.go:166] provisioning hostname "default-k8s-diff-port-661357"
	I1026 02:20:04.839194   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetMachineName
	I1026 02:20:04.839412   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:04.840850   67066 machine.go:96] duration metric: took 4m37.411273522s to provisionDockerMachine
	I1026 02:20:04.840889   67066 fix.go:56] duration metric: took 4m37.432968576s for fixHost
	I1026 02:20:04.840895   67066 start.go:83] releasing machines lock for "default-k8s-diff-port-661357", held for 4m37.432989897s
	W1026 02:20:04.840909   67066 start.go:714] error starting host: provision: host is not running
	W1026 02:20:04.840976   67066 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1026 02:20:04.840985   67066 start.go:729] Will try again in 5 seconds ...
	I1026 02:20:09.842689   67066 start.go:360] acquireMachinesLock for default-k8s-diff-port-661357: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 02:20:09.842791   67066 start.go:364] duration metric: took 60.747µs to acquireMachinesLock for "default-k8s-diff-port-661357"
	I1026 02:20:09.842816   67066 start.go:96] Skipping create...Using existing machine configuration
	I1026 02:20:09.842831   67066 fix.go:54] fixHost starting: 
	I1026 02:20:09.843132   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:20:09.843155   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:20:09.858340   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
	I1026 02:20:09.858814   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:20:09.859276   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:20:09.859298   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:20:09.859609   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:20:09.859793   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:09.859963   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:20:09.861770   67066 fix.go:112] recreateIfNeeded on default-k8s-diff-port-661357: state=Stopped err=<nil>
	I1026 02:20:09.861794   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	W1026 02:20:09.861945   67066 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 02:20:09.864154   67066 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-661357" ...
	I1026 02:20:09.865351   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Start
	I1026 02:20:09.865594   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Ensuring networks are active...
	I1026 02:20:09.866340   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Ensuring network default is active
	I1026 02:20:09.866708   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Ensuring network mk-default-k8s-diff-port-661357 is active
	I1026 02:20:09.867181   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Getting domain xml...
	I1026 02:20:09.867849   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Creating domain...
	I1026 02:20:11.157180   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting to get IP...
	I1026 02:20:11.158004   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:11.158420   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:11.158479   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:11.158401   68753 retry.go:31] will retry after 205.32589ms: waiting for machine to come up
	I1026 02:20:11.366215   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:11.366787   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:11.366816   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:11.366743   68753 retry.go:31] will retry after 372.887432ms: waiting for machine to come up
	I1026 02:20:11.741620   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:11.742196   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:11.742217   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:11.742154   68753 retry.go:31] will retry after 309.993426ms: waiting for machine to come up
	I1026 02:20:12.053939   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:12.054367   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:12.054396   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:12.054333   68753 retry.go:31] will retry after 391.94553ms: waiting for machine to come up
	I1026 02:20:12.447938   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:12.448418   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:12.448442   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:12.448370   68753 retry.go:31] will retry after 658.550669ms: waiting for machine to come up
	I1026 02:20:13.108487   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:13.109103   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:13.109129   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:13.109035   68753 retry.go:31] will retry after 709.02963ms: waiting for machine to come up
	I1026 02:20:13.819859   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:13.820380   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:13.820410   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:13.820328   68753 retry.go:31] will retry after 845.655125ms: waiting for machine to come up
	I1026 02:20:14.667789   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:14.668287   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:14.668315   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:14.668232   68753 retry.go:31] will retry after 1.007484364s: waiting for machine to come up
	I1026 02:20:15.677769   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:15.678274   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:15.678305   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:15.678183   68753 retry.go:31] will retry after 1.820092111s: waiting for machine to come up
	I1026 02:20:17.501043   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:17.501462   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:17.501497   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:17.501456   68753 retry.go:31] will retry after 1.646280238s: waiting for machine to come up
	I1026 02:20:19.150297   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:19.150860   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:19.150887   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:19.150823   68753 retry.go:31] will retry after 2.698451428s: waiting for machine to come up
	I1026 02:20:21.850608   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:21.851011   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:21.851042   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:21.850970   68753 retry.go:31] will retry after 2.282943942s: waiting for machine to come up
	I1026 02:20:24.136310   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:24.136784   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | unable to find current IP address of domain default-k8s-diff-port-661357 in network mk-default-k8s-diff-port-661357
	I1026 02:20:24.136813   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | I1026 02:20:24.136736   68753 retry.go:31] will retry after 3.403699394s: waiting for machine to come up
	I1026 02:20:27.543572   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.544171   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Found IP for machine: 192.168.72.18
	I1026 02:20:27.544200   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Reserving static IP address...
	I1026 02:20:27.544216   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has current primary IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.544612   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-661357", mac: "52:54:00:0c:41:27", ip: "192.168.72.18"} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:27.544633   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Reserved static IP address: 192.168.72.18
	I1026 02:20:27.544645   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | skip adding static IP to network mk-default-k8s-diff-port-661357 - found existing host DHCP lease matching {name: "default-k8s-diff-port-661357", mac: "52:54:00:0c:41:27", ip: "192.168.72.18"}
	I1026 02:20:27.544656   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Getting to WaitForSSH function...
	I1026 02:20:27.544667   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Waiting for SSH to be available...
	I1026 02:20:27.547163   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.547543   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:27.547574   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.547780   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Using SSH client type: external
	I1026 02:20:27.547816   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa (-rw-------)
	I1026 02:20:27.547858   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 02:20:27.547876   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | About to run SSH command:
	I1026 02:20:27.547890   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | exit 0
	I1026 02:20:27.669305   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | SSH cmd err, output: <nil>: 
	I1026 02:20:27.669693   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetConfigRaw
	I1026 02:20:27.670363   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetIP
	I1026 02:20:27.673029   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.673439   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:27.673468   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.673720   67066 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/config.json ...
	I1026 02:20:27.673952   67066 machine.go:93] provisionDockerMachine start ...
	I1026 02:20:27.673973   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:27.674200   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:27.676638   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.676982   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:27.677013   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.677123   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:27.677299   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:27.677481   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:27.677616   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:27.677769   67066 main.go:141] libmachine: Using SSH client type: native
	I1026 02:20:27.677965   67066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:20:27.677977   67066 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 02:20:27.777578   67066 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1026 02:20:27.777607   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetMachineName
	I1026 02:20:27.777854   67066 buildroot.go:166] provisioning hostname "default-k8s-diff-port-661357"
	I1026 02:20:27.777884   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetMachineName
	I1026 02:20:27.778079   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:27.780842   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.781223   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:27.781247   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.781467   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:27.781649   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:27.781786   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:27.781898   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:27.782054   67066 main.go:141] libmachine: Using SSH client type: native
	I1026 02:20:27.782256   67066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:20:27.782281   67066 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-661357 && echo "default-k8s-diff-port-661357" | sudo tee /etc/hostname
	I1026 02:20:27.896677   67066 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-661357
	
	I1026 02:20:27.896708   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:27.899493   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.899870   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:27.899936   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:27.900124   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:27.900328   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:27.900496   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:27.900663   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:27.900904   67066 main.go:141] libmachine: Using SSH client type: native
	I1026 02:20:27.901120   67066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:20:27.901137   67066 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-661357' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-661357/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-661357' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 02:20:28.011530   67066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:20:28.011565   67066 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 02:20:28.011596   67066 buildroot.go:174] setting up certificates
	I1026 02:20:28.011606   67066 provision.go:84] configureAuth start
	I1026 02:20:28.011614   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetMachineName
	I1026 02:20:28.011917   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetIP
	I1026 02:20:28.014919   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.015327   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.015353   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.015542   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:28.017631   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.017987   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.018015   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.018230   67066 provision.go:143] copyHostCerts
	I1026 02:20:28.018310   67066 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 02:20:28.018328   67066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 02:20:28.018405   67066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 02:20:28.018513   67066 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 02:20:28.018523   67066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 02:20:28.018562   67066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 02:20:28.018668   67066 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 02:20:28.018681   67066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 02:20:28.018718   67066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 02:20:28.018784   67066 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-661357 san=[127.0.0.1 192.168.72.18 default-k8s-diff-port-661357 localhost minikube]
	I1026 02:20:28.283116   67066 provision.go:177] copyRemoteCerts
	I1026 02:20:28.283179   67066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 02:20:28.283203   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:28.285996   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.286331   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.286355   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.286505   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:28.286714   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:28.286858   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:28.286960   67066 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:20:28.367395   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 02:20:28.391783   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1026 02:20:28.414947   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 02:20:28.440556   67066 provision.go:87] duration metric: took 428.936668ms to configureAuth
	I1026 02:20:28.440591   67066 buildroot.go:189] setting minikube options for container-runtime
	I1026 02:20:28.440783   67066 config.go:182] Loaded profile config "default-k8s-diff-port-661357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:20:28.440865   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:28.443825   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.444235   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.444281   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.444450   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:28.444683   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:28.444890   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:28.445056   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:28.445252   67066 main.go:141] libmachine: Using SSH client type: native
	I1026 02:20:28.445484   67066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:20:28.445513   67066 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 02:20:28.657448   67066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 02:20:28.657478   67066 machine.go:96] duration metric: took 983.512613ms to provisionDockerMachine
	I1026 02:20:28.657490   67066 start.go:293] postStartSetup for "default-k8s-diff-port-661357" (driver="kvm2")
	I1026 02:20:28.657501   67066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 02:20:28.657522   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:28.657861   67066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 02:20:28.657890   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:28.660571   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.660926   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.660959   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.661118   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:28.661298   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:28.661472   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:28.661620   67066 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:20:28.740276   67066 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 02:20:28.744331   67066 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 02:20:28.744356   67066 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 02:20:28.744454   67066 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 02:20:28.744564   67066 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 02:20:28.744699   67066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 02:20:28.754074   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:20:28.776812   67066 start.go:296] duration metric: took 119.305158ms for postStartSetup
	I1026 02:20:28.776859   67066 fix.go:56] duration metric: took 18.93402724s for fixHost
	I1026 02:20:28.776882   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:28.779953   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.780312   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.780340   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.780524   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:28.780741   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:28.780886   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:28.781041   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:28.781233   67066 main.go:141] libmachine: Using SSH client type: native
	I1026 02:20:28.781510   67066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I1026 02:20:28.781527   67066 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 02:20:28.882528   67066 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729909228.857250366
	
	I1026 02:20:28.882548   67066 fix.go:216] guest clock: 1729909228.857250366
	I1026 02:20:28.882556   67066 fix.go:229] Guest: 2024-10-26 02:20:28.857250366 +0000 UTC Remote: 2024-10-26 02:20:28.776864275 +0000 UTC m=+301.517684501 (delta=80.386091ms)
	I1026 02:20:28.882576   67066 fix.go:200] guest clock delta is within tolerance: 80.386091ms
	I1026 02:20:28.882581   67066 start.go:83] releasing machines lock for "default-k8s-diff-port-661357", held for 19.03978033s
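
The step above reads the guest clock over SSH with date +%s.%N and accepts the ~80ms skew against the host before releasing the machines lock. A minimal Go sketch of that comparison, assuming the remote timestamp has already been captured as a seconds.nanoseconds string; the parsing and the one-second tolerance here are illustrative rather than minikube's actual implementation:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseEpoch converts the output of `date +%s.%N` (seconds.nanoseconds) into a
    // time.Time. Fractional handling is simplified: the nanosecond part is only
    // used when it has the full nine digits.
    func parseEpoch(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 && len(parts[1]) == 9 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        // In the real flow this string is captured over SSH; the value below is the
        // one from the log, so the printed delta just reflects elapsed real time.
        guest, err := parseEpoch("1729909228.857250366")
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // illustrative threshold, not minikube's exact value
        if delta <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
        }
    }
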
	I1026 02:20:28.882597   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:28.882848   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetIP
	I1026 02:20:28.885339   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.885691   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.885721   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.885871   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:28.886321   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:28.886498   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:28.886579   67066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 02:20:28.886634   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:28.886776   67066 ssh_runner.go:195] Run: cat /version.json
	I1026 02:20:28.886803   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:28.889458   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.889630   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.889839   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.889865   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.890022   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:28.890032   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:28.890056   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:28.890242   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:28.890243   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:28.890401   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:28.890466   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:28.890581   67066 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:20:28.890673   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:28.890982   67066 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:20:29.001770   67066 ssh_runner.go:195] Run: systemctl --version
	I1026 02:20:29.007670   67066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 02:20:29.150271   67066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 02:20:29.156252   67066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 02:20:29.156336   67066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 02:20:29.172267   67066 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 02:20:29.172292   67066 start.go:495] detecting cgroup driver to use...
	I1026 02:20:29.172352   67066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 02:20:29.188769   67066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 02:20:29.203250   67066 docker.go:217] disabling cri-docker service (if available) ...
	I1026 02:20:29.203306   67066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 02:20:29.217222   67066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 02:20:29.230972   67066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 02:20:29.346698   67066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 02:20:29.520440   67066 docker.go:233] disabling docker service ...
	I1026 02:20:29.520532   67066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 02:20:29.534512   67066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 02:20:29.547618   67066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 02:20:29.674170   67066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 02:20:29.790614   67066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 02:20:29.805113   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 02:20:29.823385   67066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 02:20:29.823459   67066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:20:29.834548   67066 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 02:20:29.834612   67066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:20:29.845635   67066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:20:29.855964   67066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:20:29.867741   67066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 02:20:29.878595   67066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:20:29.889257   67066 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:20:29.906208   67066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:20:29.917146   67066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 02:20:29.926950   67066 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 02:20:29.927020   67066 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 02:20:29.941373   67066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 02:20:29.951206   67066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:20:30.066163   67066 ssh_runner.go:195] Run: sudo systemctl restart crio
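
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, sysctls) before crio is restarted. A rough Go sketch of the same kind of key = value rewrite against a local copy of the drop-in; the regexp approach and the 02-crio.conf path in the working directory are illustrative assumptions, not minikube's code:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setConfigValue rewrites any existing `key = ...` line in a CRI-O drop-in,
    // the same effect as the sed substitutions shown in the log above. It does not
    // append the key if it is missing.
    func setConfigValue(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        // Hypothetical local copy; on the VM the file is /etc/crio/crio.conf.d/02-crio.conf.
        path := "02-crio.conf"
        if err := setConfigValue(path, "pause_image", "registry.k8s.io/pause:3.10"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if err := setConfigValue(path, "cgroup_manager", "cgroupfs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
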
	I1026 02:20:30.155026   67066 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 02:20:30.155112   67066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 02:20:30.159790   67066 start.go:563] Will wait 60s for crictl version
	I1026 02:20:30.159849   67066 ssh_runner.go:195] Run: which crictl
	I1026 02:20:30.163600   67066 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 02:20:30.203002   67066 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
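
After restarting crio, the log waits up to 60s for /var/run/crio/crio.sock and then for crictl to answer a version query. A minimal Go sketch of waiting for a socket path to appear with a timeout, in the spirit of that step; the polling interval is an assumption:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the timeout elapses, roughly what the
    // "Will wait 60s for socket path" step above does over SSH.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("socket is available")
    }
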
	I1026 02:20:30.203078   67066 ssh_runner.go:195] Run: crio --version
	I1026 02:20:30.229655   67066 ssh_runner.go:195] Run: crio --version
	I1026 02:20:30.260019   67066 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 02:20:30.261218   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetIP
	I1026 02:20:30.264497   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:30.264886   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:30.264907   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:30.265160   67066 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1026 02:20:30.269055   67066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:20:30.281497   67066 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-661357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 02:20:30.281649   67066 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:20:30.281743   67066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:20:30.317981   67066 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1026 02:20:30.318061   67066 ssh_runner.go:195] Run: which lz4
	I1026 02:20:30.321759   67066 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 02:20:30.325850   67066 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 02:20:30.325896   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1026 02:20:31.651772   67066 crio.go:462] duration metric: took 1.330041951s to copy over tarball
	I1026 02:20:31.651888   67066 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 02:20:33.804858   67066 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.152934864s)
	I1026 02:20:33.804901   67066 crio.go:469] duration metric: took 2.153098897s to extract the tarball
	I1026 02:20:33.804912   67066 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 02:20:33.841380   67066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:20:33.884198   67066 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 02:20:33.884234   67066 cache_images.go:84] Images are preloaded, skipping loading
	I1026 02:20:33.884244   67066 kubeadm.go:934] updating node { 192.168.72.18 8444 v1.31.2 crio true true} ...
	I1026 02:20:33.884372   67066 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-661357 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
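
The kubelet drop-in above is generated from the node's settings (hostname override, node IP, runtime service). A simplified Go sketch that renders a similar unit with text/template; the template shape and field names are illustrative, not minikube's bootstrapper:

    package main

    import (
        "os"
        "text/template"
    )

    // A simplified stand-in for the kubelet systemd drop-in rendered above; the
    // real template lives in minikube's bootstrapper, this one is illustrative.
    const kubeletDropIn = `[Unit]
    Wants={{.RuntimeService}}

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

    [Install]
    `

    type nodeConfig struct {
        RuntimeService string
        KubeletPath    string
        NodeName       string
        NodeIP         string
    }

    func main() {
        cfg := nodeConfig{
            RuntimeService: "crio.service",
            KubeletPath:    "/var/lib/minikube/binaries/v1.31.2/kubelet",
            NodeName:       "default-k8s-diff-port-661357",
            NodeIP:         "192.168.72.18",
        }
        tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
        // Written to stdout here; the log shows the rendered unit being copied to
        // /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the VM.
        if err := tmpl.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }
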
	I1026 02:20:33.884455   67066 ssh_runner.go:195] Run: crio config
	I1026 02:20:33.938946   67066 cni.go:84] Creating CNI manager for ""
	I1026 02:20:33.938971   67066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:20:33.938983   67066 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 02:20:33.939013   67066 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.18 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-661357 NodeName:default-k8s-diff-port-661357 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 02:20:33.939158   67066 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.18
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-661357"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.18"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.18"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
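
The generated kubeadm config above wires podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12. A small Go sketch that sanity-checks those two CIDRs parse and do not overlap; this check is illustrative and not part of minikube:

    package main

    import (
        "fmt"
        "net"
        "os"
    )

    func main() {
        // Subnets taken from the generated kubeadm config above.
        subnets := map[string]string{
            "podSubnet":     "10.244.0.0/16",
            "serviceSubnet": "10.96.0.0/12",
        }
        var nets []*net.IPNet
        for name, cidr := range subnets {
            _, n, err := net.ParseCIDR(cidr)
            if err != nil {
                fmt.Fprintf(os.Stderr, "%s %q is not a valid CIDR: %v\n", name, cidr, err)
                os.Exit(1)
            }
            nets = append(nets, n)
        }
        // Aligned CIDR blocks overlap only if one contains the other's network
        // address, so checking both directions is sufficient.
        if nets[0].Contains(nets[1].IP) || nets[1].Contains(nets[0].IP) {
            fmt.Fprintln(os.Stderr, "pod and service subnets overlap")
            os.Exit(1)
        }
        fmt.Println("pod and service subnets are valid and disjoint")
    }
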
	
	I1026 02:20:33.939231   67066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 02:20:33.949891   67066 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 02:20:33.949958   67066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 02:20:33.959789   67066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1026 02:20:33.976623   67066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 02:20:33.991359   67066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1026 02:20:34.007135   67066 ssh_runner.go:195] Run: grep 192.168.72.18	control-plane.minikube.internal$ /etc/hosts
	I1026 02:20:34.010559   67066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
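
The bash one-liner above strips any stale control-plane.minikube.internal line from /etc/hosts and appends the current mapping. A minimal Go sketch of the same idempotent update against a hypothetical local hosts file (the real command edits /etc/hosts on the guest; blank lines are dropped for brevity):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry removes any existing line for host and appends "ip<TAB>host",
    // roughly the effect of the grep -v / echo / cp pipeline shown in the log above.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line == "" || strings.HasSuffix(line, "\t"+host) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // "hosts" is a hypothetical local file; the real target is /etc/hosts on the guest VM.
        if err := ensureHostsEntry("hosts", "192.168.72.18", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("hosts entry ensured")
    }
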
	I1026 02:20:34.021707   67066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:20:34.150232   67066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:20:34.177824   67066 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357 for IP: 192.168.72.18
	I1026 02:20:34.177849   67066 certs.go:194] generating shared ca certs ...
	I1026 02:20:34.177869   67066 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:20:34.178034   67066 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 02:20:34.178097   67066 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 02:20:34.178112   67066 certs.go:256] generating profile certs ...
	I1026 02:20:34.178241   67066 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/client.key
	I1026 02:20:34.178341   67066 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.key.29c0eec6
	I1026 02:20:34.178401   67066 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/proxy-client.key
	I1026 02:20:34.178613   67066 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 02:20:34.178665   67066 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 02:20:34.178677   67066 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 02:20:34.178709   67066 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 02:20:34.178747   67066 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 02:20:34.178780   67066 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 02:20:34.178839   67066 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:20:34.179773   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 02:20:34.228350   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 02:20:34.274677   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 02:20:34.312372   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 02:20:34.343042   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 02:20:34.369490   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 02:20:34.392203   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 02:20:34.414716   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/default-k8s-diff-port-661357/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 02:20:34.439171   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 02:20:34.462507   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 02:20:34.484198   67066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 02:20:34.506399   67066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 02:20:34.521925   67066 ssh_runner.go:195] Run: openssl version
	I1026 02:20:34.527762   67066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 02:20:34.537980   67066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 02:20:34.542334   67066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 02:20:34.542393   67066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 02:20:34.548210   67066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 02:20:34.558367   67066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 02:20:34.568179   67066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 02:20:34.572155   67066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 02:20:34.572207   67066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 02:20:34.577337   67066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 02:20:34.586783   67066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 02:20:34.596539   67066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:20:34.600705   67066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:20:34.600751   67066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:20:34.606006   67066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 02:20:34.615835   67066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 02:20:34.619908   67066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 02:20:34.625291   67066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 02:20:34.630936   67066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 02:20:34.636410   67066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 02:20:34.641881   67066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 02:20:34.648366   67066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
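
The openssl -checkend 86400 calls above confirm each control-plane certificate stays valid for at least another day before attempting a cluster restart. An equivalent check in Go using crypto/x509; the certificate path is a placeholder for files like /var/lib/minikube/certs/apiserver-kubelet-client.crt:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file expires
    // within d, the same condition `openssl x509 -checkend` tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // Placeholder path; the log checks certs under /var/lib/minikube/certs.
        soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("certificate expires within 24h: regeneration needed")
        } else {
            fmt.Println("certificate valid for at least another 24h")
        }
    }
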
	I1026 02:20:34.653688   67066 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-661357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-661357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:20:34.653770   67066 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 02:20:34.653819   67066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:20:34.692272   67066 cri.go:89] found id: ""
	I1026 02:20:34.692362   67066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 02:20:34.702791   67066 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1026 02:20:34.702811   67066 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1026 02:20:34.702858   67066 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 02:20:34.712118   67066 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 02:20:34.713520   67066 kubeconfig.go:125] found "default-k8s-diff-port-661357" server: "https://192.168.72.18:8444"
	I1026 02:20:34.716689   67066 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 02:20:34.725334   67066 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.18
	I1026 02:20:34.725362   67066 kubeadm.go:1160] stopping kube-system containers ...
	I1026 02:20:34.725374   67066 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1026 02:20:34.725440   67066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:20:34.757678   67066 cri.go:89] found id: ""
	I1026 02:20:34.757745   67066 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1026 02:20:34.772453   67066 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:20:34.781104   67066 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:20:34.781125   67066 kubeadm.go:157] found existing configuration files:
	
	I1026 02:20:34.781173   67066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1026 02:20:34.789342   67066 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:20:34.789396   67066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:20:34.797951   67066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1026 02:20:34.805987   67066 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:20:34.806057   67066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:20:34.814807   67066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1026 02:20:34.822626   67066 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:20:34.822693   67066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:20:34.830967   67066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1026 02:20:34.839120   67066 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:20:34.839177   67066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 02:20:34.847796   67066 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 02:20:34.856342   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:20:34.956523   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:20:35.768693   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:20:35.968797   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:20:36.040536   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:20:36.130180   67066 api_server.go:52] waiting for apiserver process to appear ...
	I1026 02:20:36.130300   67066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:20:36.630495   67066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:20:37.130625   67066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:20:37.630728   67066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:20:38.130795   67066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:20:38.151595   67066 api_server.go:72] duration metric: took 2.02141435s to wait for apiserver process to appear ...
	I1026 02:20:38.151637   67066 api_server.go:88] waiting for apiserver healthz status ...
	I1026 02:20:38.151662   67066 api_server.go:253] Checking apiserver healthz at https://192.168.72.18:8444/healthz ...
	I1026 02:20:38.152162   67066 api_server.go:269] stopped: https://192.168.72.18:8444/healthz: Get "https://192.168.72.18:8444/healthz": dial tcp 192.168.72.18:8444: connect: connection refused
	I1026 02:20:38.651789   67066 api_server.go:253] Checking apiserver healthz at https://192.168.72.18:8444/healthz ...
	I1026 02:20:40.769681   67066 api_server.go:279] https://192.168.72.18:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 02:20:40.769741   67066 api_server.go:103] status: https://192.168.72.18:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 02:20:40.769766   67066 api_server.go:253] Checking apiserver healthz at https://192.168.72.18:8444/healthz ...
	I1026 02:20:40.810385   67066 api_server.go:279] https://192.168.72.18:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 02:20:40.810422   67066 api_server.go:103] status: https://192.168.72.18:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 02:20:41.152677   67066 api_server.go:253] Checking apiserver healthz at https://192.168.72.18:8444/healthz ...
	I1026 02:20:41.164322   67066 api_server.go:279] https://192.168.72.18:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 02:20:41.164353   67066 api_server.go:103] status: https://192.168.72.18:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 02:20:41.651791   67066 api_server.go:253] Checking apiserver healthz at https://192.168.72.18:8444/healthz ...
	I1026 02:20:41.658110   67066 api_server.go:279] https://192.168.72.18:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 02:20:41.658146   67066 api_server.go:103] status: https://192.168.72.18:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 02:20:42.151728   67066 api_server.go:253] Checking apiserver healthz at https://192.168.72.18:8444/healthz ...
	I1026 02:20:42.163110   67066 api_server.go:279] https://192.168.72.18:8444/healthz returned 200:
	ok
	I1026 02:20:42.170287   67066 api_server.go:141] control plane version: v1.31.2
	I1026 02:20:42.170314   67066 api_server.go:131] duration metric: took 4.018669008s to wait for apiserver health ...
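
The loop above polls https://192.168.72.18:8444/healthz until the 403/500 responses give way to 200. A minimal Go polling sketch of that wait; it skips TLS verification for brevity, whereas minikube trusts the cluster CA, and the interval and timeout are assumptions:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // For brevity only; minikube authenticates against the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %v", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.18:8444/healthz", 4*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("apiserver is healthy")
    }
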
	I1026 02:20:42.170324   67066 cni.go:84] Creating CNI manager for ""
	I1026 02:20:42.170332   67066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 02:20:42.172451   67066 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 02:20:42.173984   67066 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 02:20:42.185616   67066 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1026 02:20:42.223096   67066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 02:20:42.234788   67066 system_pods.go:59] 8 kube-system pods found
	I1026 02:20:42.234847   67066 system_pods.go:61] "coredns-7c65d6cfc9-xpxp4" [d3ea4ee4-aab2-4c92-ab2f-e1026c703ea1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 02:20:42.234863   67066 system_pods.go:61] "etcd-default-k8s-diff-port-661357" [e0edffc7-d9fa-45e0-9250-3ea465d61e01] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 02:20:42.234878   67066 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-661357" [87332b2c-b6bd-4008-8db7-76b60f782d8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 02:20:42.234892   67066 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-661357" [4eb18006-0e9c-466c-8be9-c16250a8851b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 02:20:42.234905   67066 system_pods.go:61] "kube-proxy-c947q" [e41c6a1e-1a8e-4c49-93ff-e0c60a87ea69] Running
	I1026 02:20:42.234914   67066 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-661357" [af14b2f6-20bd-4f05-9a9d-ea1ca7e53887] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 02:20:42.234924   67066 system_pods.go:61] "metrics-server-6867b74b74-jkl5g" [023bd779-83b7-42ef-893d-f7ab70f08ae7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 02:20:42.234940   67066 system_pods.go:61] "storage-provisioner" [90c86915-4d74-4774-b8cd-86bf37672a55] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 02:20:42.234952   67066 system_pods.go:74] duration metric: took 11.834154ms to wait for pod list to return data ...
	I1026 02:20:42.234964   67066 node_conditions.go:102] verifying NodePressure condition ...
	I1026 02:20:42.240100   67066 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 02:20:42.240138   67066 node_conditions.go:123] node cpu capacity is 2
	I1026 02:20:42.240153   67066 node_conditions.go:105] duration metric: took 5.181139ms to run NodePressure ...
	I1026 02:20:42.240175   67066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 02:20:42.505336   67066 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1026 02:20:42.510487   67066 kubeadm.go:739] kubelet initialised
	I1026 02:20:42.510509   67066 kubeadm.go:740] duration metric: took 5.142371ms waiting for restarted kubelet to initialise ...
	I1026 02:20:42.510517   67066 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:20:42.515070   67066 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:42.519704   67066 pod_ready.go:98] node "default-k8s-diff-port-661357" hosting pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:42.519733   67066 pod_ready.go:82] duration metric: took 4.641295ms for pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace to be "Ready" ...
	E1026 02:20:42.519745   67066 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-661357" hosting pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:42.519754   67066 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:42.523349   67066 pod_ready.go:98] node "default-k8s-diff-port-661357" hosting pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:42.523371   67066 pod_ready.go:82] duration metric: took 3.607793ms for pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	E1026 02:20:42.523389   67066 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-661357" hosting pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:42.523404   67066 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:42.527098   67066 pod_ready.go:98] node "default-k8s-diff-port-661357" hosting pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:42.527122   67066 pod_ready.go:82] duration metric: took 3.706328ms for pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	E1026 02:20:42.527134   67066 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-661357" hosting pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:42.527147   67066 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:42.626144   67066 pod_ready.go:98] node "default-k8s-diff-port-661357" hosting pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:42.626175   67066 pod_ready.go:82] duration metric: took 99.014479ms for pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	E1026 02:20:42.626187   67066 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-661357" hosting pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:42.626194   67066 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-c947q" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:43.026245   67066 pod_ready.go:98] node "default-k8s-diff-port-661357" hosting pod "kube-proxy-c947q" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:43.026277   67066 pod_ready.go:82] duration metric: took 400.075235ms for pod "kube-proxy-c947q" in "kube-system" namespace to be "Ready" ...
	E1026 02:20:43.026289   67066 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-661357" hosting pod "kube-proxy-c947q" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:43.026298   67066 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:43.426236   67066 pod_ready.go:98] node "default-k8s-diff-port-661357" hosting pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:43.426268   67066 pod_ready.go:82] duration metric: took 399.958763ms for pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	E1026 02:20:43.426285   67066 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-661357" hosting pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:43.426295   67066 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:43.827259   67066 pod_ready.go:98] node "default-k8s-diff-port-661357" hosting pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:43.827290   67066 pod_ready.go:82] duration metric: took 400.983426ms for pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace to be "Ready" ...
	E1026 02:20:43.827305   67066 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-661357" hosting pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:43.827316   67066 pod_ready.go:39] duration metric: took 1.316791104s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:20:43.827333   67066 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 02:20:43.839420   67066 ops.go:34] apiserver oom_adj: -16
	I1026 02:20:43.839452   67066 kubeadm.go:597] duration metric: took 9.136633662s to restartPrimaryControlPlane
	I1026 02:20:43.839468   67066 kubeadm.go:394] duration metric: took 9.185783947s to StartCluster
	I1026 02:20:43.839492   67066 settings.go:142] acquiring lock: {Name:mkb363a7a1b1532a7f832b54a0283d0a9e3d2b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:20:43.839591   67066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:20:43.842166   67066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:20:43.842434   67066 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.18 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 02:20:43.842534   67066 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 02:20:43.842640   67066 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-661357"
	I1026 02:20:43.842660   67066 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-661357"
	I1026 02:20:43.842667   67066 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-661357"
	W1026 02:20:43.842677   67066 addons.go:243] addon storage-provisioner should already be in state true
	I1026 02:20:43.842693   67066 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-661357"
	I1026 02:20:43.842689   67066 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-661357"
	I1026 02:20:43.842708   67066 host.go:66] Checking if "default-k8s-diff-port-661357" exists ...
	I1026 02:20:43.842713   67066 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-661357"
	W1026 02:20:43.842721   67066 addons.go:243] addon metrics-server should already be in state true
	I1026 02:20:43.842737   67066 config.go:182] Loaded profile config "default-k8s-diff-port-661357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:20:43.842749   67066 host.go:66] Checking if "default-k8s-diff-port-661357" exists ...
	I1026 02:20:43.843146   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:20:43.843163   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:20:43.843166   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:20:43.843183   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:20:43.843188   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:20:43.843200   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:20:43.844170   67066 out.go:177] * Verifying Kubernetes components...
	I1026 02:20:43.845572   67066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:20:43.859423   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37119
	I1026 02:20:43.859946   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:20:43.860482   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:20:43.860508   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:20:43.860900   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:20:43.861533   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:20:43.861580   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:20:43.863282   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33059
	I1026 02:20:43.863431   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34765
	I1026 02:20:43.863891   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:20:43.863911   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:20:43.864365   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:20:43.864385   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:20:43.864389   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:20:43.864407   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:20:43.864769   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:20:43.864788   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:20:43.864985   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:20:43.865314   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:20:43.865353   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:20:43.868025   67066 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-661357"
	W1026 02:20:43.868041   67066 addons.go:243] addon default-storageclass should already be in state true
	I1026 02:20:43.868063   67066 host.go:66] Checking if "default-k8s-diff-port-661357" exists ...
	I1026 02:20:43.868321   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:20:43.868357   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:20:43.877922   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41395
	I1026 02:20:43.878359   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:20:43.878855   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:20:43.878868   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:20:43.879138   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:20:43.879294   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:20:43.880925   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:43.882414   67066 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1026 02:20:43.883480   67066 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 02:20:43.883498   67066 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 02:20:43.883516   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:43.886539   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:43.886936   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:43.886958   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:43.887173   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:43.887326   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:43.887469   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:43.887593   67066 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:20:43.889753   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38213
	I1026 02:20:43.890268   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:20:43.890810   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:20:43.890840   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:20:43.891162   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:20:43.891350   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:20:43.892902   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:43.894549   67066 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:20:43.895782   67066 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:20:43.895797   67066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 02:20:43.895814   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:43.899634   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:43.900029   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:43.900047   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:43.900244   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:43.900368   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:43.900505   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:43.900633   67066 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:20:43.907056   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
	I1026 02:20:43.907446   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:20:43.908340   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:20:43.908359   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:20:43.908692   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:20:43.910127   67066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:20:43.910158   67066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:20:43.926987   67066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45143
	I1026 02:20:43.927446   67066 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:20:43.929170   67066 main.go:141] libmachine: Using API Version  1
	I1026 02:20:43.929188   67066 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:20:43.929754   67066 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:20:43.930383   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetState
	I1026 02:20:43.932008   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .DriverName
	I1026 02:20:43.932199   67066 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 02:20:43.932215   67066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 02:20:43.932233   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHHostname
	I1026 02:20:43.934609   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:43.934877   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:41:27", ip: ""} in network mk-default-k8s-diff-port-661357: {Iface:virbr4 ExpiryTime:2024-10-26 03:20:20 +0000 UTC Type:0 Mac:52:54:00:0c:41:27 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:default-k8s-diff-port-661357 Clientid:01:52:54:00:0c:41:27}
	I1026 02:20:43.934900   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | domain default-k8s-diff-port-661357 has defined IP address 192.168.72.18 and MAC address 52:54:00:0c:41:27 in network mk-default-k8s-diff-port-661357
	I1026 02:20:43.935066   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHPort
	I1026 02:20:43.935213   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHKeyPath
	I1026 02:20:43.935335   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .GetSSHUsername
	I1026 02:20:43.935519   67066 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/default-k8s-diff-port-661357/id_rsa Username:docker}
	I1026 02:20:44.079965   67066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:20:44.101438   67066 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-661357" to be "Ready" ...
	I1026 02:20:44.157295   67066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 02:20:44.253190   67066 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 02:20:44.253216   67066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1026 02:20:44.263508   67066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:20:44.318176   67066 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 02:20:44.318219   67066 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 02:20:44.398217   67066 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 02:20:44.398239   67066 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 02:20:44.491239   67066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 02:20:44.623927   67066 main.go:141] libmachine: Making call to close driver server
	I1026 02:20:44.623955   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:20:44.624363   67066 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:20:44.624383   67066 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:20:44.624396   67066 main.go:141] libmachine: Making call to close driver server
	I1026 02:20:44.624405   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:20:44.624622   67066 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:20:44.624639   67066 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:20:44.624642   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Closing plugin on server side
	I1026 02:20:44.631038   67066 main.go:141] libmachine: Making call to close driver server
	I1026 02:20:44.631055   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:20:44.631301   67066 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:20:44.631320   67066 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:20:45.235238   67066 main.go:141] libmachine: Making call to close driver server
	I1026 02:20:45.235265   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:20:45.235592   67066 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:20:45.235618   67066 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:20:45.235628   67066 main.go:141] libmachine: Making call to close driver server
	I1026 02:20:45.235627   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Closing plugin on server side
	I1026 02:20:45.235637   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:20:45.235905   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Closing plugin on server side
	I1026 02:20:45.235947   67066 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:20:45.235966   67066 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:20:45.406802   67066 main.go:141] libmachine: Making call to close driver server
	I1026 02:20:45.406826   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:20:45.407169   67066 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:20:45.407188   67066 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:20:45.407197   67066 main.go:141] libmachine: Making call to close driver server
	I1026 02:20:45.407204   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) Calling .Close
	I1026 02:20:45.407434   67066 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:20:45.407449   67066 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:20:45.407460   67066 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-661357"
	I1026 02:20:45.407477   67066 main.go:141] libmachine: (default-k8s-diff-port-661357) DBG | Closing plugin on server side
	I1026 02:20:45.409386   67066 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1026 02:20:45.410709   67066 addons.go:510] duration metric: took 1.568186199s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1026 02:20:46.105327   67066 node_ready.go:53] node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:48.105495   67066 node_ready.go:53] node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:50.105708   67066 node_ready.go:53] node "default-k8s-diff-port-661357" has status "Ready":"False"
	I1026 02:20:51.105506   67066 node_ready.go:49] node "default-k8s-diff-port-661357" has status "Ready":"True"
	I1026 02:20:51.105529   67066 node_ready.go:38] duration metric: took 7.004055158s for node "default-k8s-diff-port-661357" to be "Ready" ...
	I1026 02:20:51.105538   67066 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:20:51.110758   67066 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:51.116405   67066 pod_ready.go:93] pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace has status "Ready":"True"
	I1026 02:20:51.116427   67066 pod_ready.go:82] duration metric: took 5.642161ms for pod "coredns-7c65d6cfc9-xpxp4" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:51.116440   67066 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:53.124461   67066 pod_ready.go:93] pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace has status "Ready":"True"
	I1026 02:20:53.124489   67066 pod_ready.go:82] duration metric: took 2.008040829s for pod "etcd-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:53.124503   67066 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:53.130609   67066 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace has status "Ready":"True"
	I1026 02:20:53.130634   67066 pod_ready.go:82] duration metric: took 6.121774ms for pod "kube-apiserver-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:53.130646   67066 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:53.134438   67066 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace has status "Ready":"True"
	I1026 02:20:53.134457   67066 pod_ready.go:82] duration metric: took 3.804731ms for pod "kube-controller-manager-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:53.134466   67066 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c947q" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:53.137983   67066 pod_ready.go:93] pod "kube-proxy-c947q" in "kube-system" namespace has status "Ready":"True"
	I1026 02:20:53.137999   67066 pod_ready.go:82] duration metric: took 3.52735ms for pod "kube-proxy-c947q" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:53.138008   67066 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:54.705479   67066 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace has status "Ready":"True"
	I1026 02:20:54.705508   67066 pod_ready.go:82] duration metric: took 1.567492895s for pod "kube-scheduler-default-k8s-diff-port-661357" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:54.705524   67066 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace to be "Ready" ...
	I1026 02:20:56.713045   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:20:59.211741   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:01.713041   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:03.713999   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:06.212171   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:08.212292   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:10.212832   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:12.213756   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:14.711683   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:16.711769   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:18.712192   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:20.714206   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:23.211409   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:25.212766   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:27.712538   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:30.213972   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:32.712343   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:35.212266   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:37.712294   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:39.712378   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:42.211896   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:44.212804   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:46.712568   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:49.211905   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:51.212618   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:53.712161   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:55.713140   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:21:57.714672   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:00.212114   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:02.212796   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:04.212878   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:06.716498   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:09.211505   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:11.212929   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:13.712930   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:16.212285   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:18.213617   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:20.711664   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:22.712024   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:24.712306   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:26.713743   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:29.212832   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:31.713333   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:34.212845   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:36.715228   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:39.212046   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:41.212201   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:43.712136   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:45.712175   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:48.211846   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:50.711866   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:52.712012   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:55.211660   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:57.211816   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:22:59.713603   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:02.211786   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:04.213379   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:06.712333   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:09.211800   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:11.212369   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:13.711747   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:15.713243   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:18.212155   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:20.212562   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:22.712581   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:25.211687   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:27.712277   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:30.212893   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:32.712519   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:35.211568   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:37.212512   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:39.711265   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:41.712509   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:44.211120   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:46.212005   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:48.712867   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:51.213173   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:53.711667   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:55.712660   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:23:58.212058   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:00.712088   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:03.211648   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:05.212147   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:07.712719   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:10.211300   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:12.212043   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:14.712196   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:17.211511   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:19.212672   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:21.711887   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:23.712005   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:25.712224   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:28.211191   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:30.211948   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:32.212651   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:34.711438   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	I1026 02:24:36.714695   67066 pod_ready.go:103] pod "metrics-server-6867b74b74-jkl5g" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.914066755Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909478914042894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3984f4c5-d328-4236-ab1a-9afb230694de name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.914636766Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e77072cf-ed83-4d42-89bb-80022cbdf4f2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.914699531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e77072cf-ed83-4d42-89bb-80022cbdf4f2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.914740284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e77072cf-ed83-4d42-89bb-80022cbdf4f2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.945332581Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf664f2d-644c-4b3c-adeb-fd39aeb335fd name=/runtime.v1.RuntimeService/Version
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.945427479Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf664f2d-644c-4b3c-adeb-fd39aeb335fd name=/runtime.v1.RuntimeService/Version
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.947129698Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=417675b0-edd6-47d0-9a71-6a9212e24ff1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.947573334Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909478947548068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=417675b0-edd6-47d0-9a71-6a9212e24ff1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.948090147Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c1ea1b9-d021-437b-bb6e-db3353b86565 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.948198491Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c1ea1b9-d021-437b-bb6e-db3353b86565 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.948279232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1c1ea1b9-d021-437b-bb6e-db3353b86565 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.979901485Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a255b668-7a37-4a44-a24d-649bdbf45fed name=/runtime.v1.RuntimeService/Version
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.979993126Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a255b668-7a37-4a44-a24d-649bdbf45fed name=/runtime.v1.RuntimeService/Version
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.981046493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7bd2085d-5cf0-4306-a465-a2fac0d2e461 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.981513170Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909478981489501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7bd2085d-5cf0-4306-a465-a2fac0d2e461 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.982082521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e92aaddc-406b-4e7d-b504-3ee2689c9636 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.982221102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e92aaddc-406b-4e7d-b504-3ee2689c9636 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:24:38 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:38.982281843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e92aaddc-406b-4e7d-b504-3ee2689c9636 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:24:39 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:39.012427717Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3014f28f-2694-4fad-9161-8f4354113206 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:24:39 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:39.012513506Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3014f28f-2694-4fad-9161-8f4354113206 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:24:39 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:39.013832859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51338e4f-a316-4f69-b464-1ce00525dfc2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:24:39 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:39.014389169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909479014338796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51338e4f-a316-4f69-b464-1ce00525dfc2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:24:39 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:39.014955362Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33fd2f46-f075-4bb5-82d6-8dbcb29d8ceb name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:24:39 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:39.015003082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33fd2f46-f075-4bb5-82d6-8dbcb29d8ceb name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:24:39 old-k8s-version-385716 crio[627]: time="2024-10-26 02:24:39.015045610Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=33fd2f46-f075-4bb5-82d6-8dbcb29d8ceb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct26 02:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050858] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037180] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.872334] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.849137] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.534061] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.223439] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.056856] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067296] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.170318] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.142616] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.248491] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.314889] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.058322] kauditd_printk_skb: 130 callbacks suppressed
	[Oct26 02:05] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[ +12.983702] kauditd_printk_skb: 46 callbacks suppressed
	[Oct26 02:09] systemd-fstab-generator[5115]: Ignoring "noauto" option for root device
	[Oct26 02:11] systemd-fstab-generator[5409]: Ignoring "noauto" option for root device
	[  +0.069450] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 02:24:39 up 20 min,  0 users,  load average: 0.07, 0.03, 0.03
	Linux old-k8s-version-385716 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0001c7ef0, 0x4f0ac20, 0xc0000503c0, 0x1, 0xc00009e0c0)
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001640e0, 0xc00009e0c0)
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000ce0480, 0xc000ca1640)
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]: goroutine 166 [select]:
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000050780, 0x1, 0x0, 0x0, 0x0, 0x0)
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000208fc0, 0x0, 0x0)
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000246a80)
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 26 02:24:38 old-k8s-version-385716 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Oct 26 02:24:38 old-k8s-version-385716 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 26 02:24:38 old-k8s-version-385716 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 26 02:24:39 old-k8s-version-385716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 140.
	Oct 26 02:24:39 old-k8s-version-385716 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 26 02:24:39 old-k8s-version-385716 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-385716 -n old-k8s-version-385716
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-385716 -n old-k8s-version-385716: exit status 2 (213.171418ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-385716" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (147.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-661357 -n default-k8s-diff-port-661357
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-26 02:34:06.875522374 +0000 UTC m=+6661.643285549
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-661357 -n default-k8s-diff-port-661357
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-661357 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-661357 logs -n 25: (1.146699882s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-761631 sudo iptables                       | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo cat                            | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo cat                            | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo cat                            | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo docker                         | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo cat                            | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo cat                            | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo cat                            | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo cat                            | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo find                           | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo crio                           | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-761631                                     | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 02:28:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 02:28:56.856159   79140 out.go:345] Setting OutFile to fd 1 ...
	I1026 02:28:56.856276   79140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:28:56.856286   79140 out.go:358] Setting ErrFile to fd 2...
	I1026 02:28:56.856291   79140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:28:56.856467   79140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 02:28:56.857047   79140 out.go:352] Setting JSON to false
	I1026 02:28:56.858155   79140 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7877,"bootTime":1729901860,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 02:28:56.858244   79140 start.go:139] virtualization: kvm guest
	I1026 02:28:56.860342   79140 out.go:177] * [bridge-761631] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 02:28:56.861753   79140 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 02:28:56.861769   79140 notify.go:220] Checking for updates...
	I1026 02:28:56.864120   79140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 02:28:56.865457   79140 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:28:56.866728   79140 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:28:56.867918   79140 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 02:28:56.869121   79140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 02:28:56.870974   79140 config.go:182] Loaded profile config "default-k8s-diff-port-661357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:28:56.871113   79140 config.go:182] Loaded profile config "enable-default-cni-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:28:56.871248   79140 config.go:182] Loaded profile config "flannel-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:28:56.871360   79140 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 02:28:56.907046   79140 out.go:177] * Using the kvm2 driver based on user configuration
	I1026 02:28:56.908208   79140 start.go:297] selected driver: kvm2
	I1026 02:28:56.908219   79140 start.go:901] validating driver "kvm2" against <nil>
	I1026 02:28:56.908230   79140 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 02:28:56.908882   79140 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:28:56.908979   79140 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 02:28:56.924645   79140 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 02:28:56.924692   79140 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 02:28:56.924969   79140 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:28:56.924998   79140 cni.go:84] Creating CNI manager for "bridge"
	I1026 02:28:56.925003   79140 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 02:28:56.925054   79140 start.go:340] cluster config:
	{Name:bridge-761631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:28:56.925193   79140 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:28:56.926707   79140 out.go:177] * Starting "bridge-761631" primary control-plane node in "bridge-761631" cluster
	I1026 02:28:59.052672   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:28:59.053208   77486 main.go:141] libmachine: (flannel-761631) Found IP for machine: 192.168.61.248
	I1026 02:28:59.053231   77486 main.go:141] libmachine: (flannel-761631) Reserving static IP address...
	I1026 02:28:59.053241   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has current primary IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:28:59.053610   77486 main.go:141] libmachine: (flannel-761631) DBG | unable to find host DHCP lease matching {name: "flannel-761631", mac: "52:54:00:e1:ad:74", ip: "192.168.61.248"} in network mk-flannel-761631
	I1026 02:28:59.135986   77486 main.go:141] libmachine: (flannel-761631) DBG | Getting to WaitForSSH function...
	I1026 02:28:59.136019   77486 main.go:141] libmachine: (flannel-761631) Reserved static IP address: 192.168.61.248
	I1026 02:28:59.136034   77486 main.go:141] libmachine: (flannel-761631) Waiting for SSH to be available...
	I1026 02:28:59.138641   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:28:59.138894   77486 main.go:141] libmachine: (flannel-761631) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631
	I1026 02:28:59.138920   77486 main.go:141] libmachine: (flannel-761631) DBG | unable to find defined IP address of network mk-flannel-761631 interface with MAC address 52:54:00:e1:ad:74
	I1026 02:28:59.139100   77486 main.go:141] libmachine: (flannel-761631) DBG | Using SSH client type: external
	I1026 02:28:59.139127   77486 main.go:141] libmachine: (flannel-761631) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa (-rw-------)
	I1026 02:28:59.139154   77486 main.go:141] libmachine: (flannel-761631) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 02:28:59.139167   77486 main.go:141] libmachine: (flannel-761631) DBG | About to run SSH command:
	I1026 02:28:59.139179   77486 main.go:141] libmachine: (flannel-761631) DBG | exit 0
	I1026 02:28:59.143017   77486 main.go:141] libmachine: (flannel-761631) DBG | SSH cmd err, output: exit status 255: 
	I1026 02:28:59.143034   77486 main.go:141] libmachine: (flannel-761631) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1026 02:28:59.143041   77486 main.go:141] libmachine: (flannel-761631) DBG | command : exit 0
	I1026 02:28:59.143045   77486 main.go:141] libmachine: (flannel-761631) DBG | err     : exit status 255
	I1026 02:28:59.143052   77486 main.go:141] libmachine: (flannel-761631) DBG | output  : 
	I1026 02:28:56.927977   79140 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:28:56.928021   79140 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 02:28:56.928034   79140 cache.go:56] Caching tarball of preloaded images
	I1026 02:28:56.928130   79140 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 02:28:56.928144   79140 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 02:28:56.928270   79140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/config.json ...
	I1026 02:28:56.928300   79140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/config.json: {Name:mk0ea3c89d6ff01c0e3a98a985d381e9c11db97e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:28:56.928473   79140 start.go:360] acquireMachinesLock for bridge-761631: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 02:29:03.538004   79140 start.go:364] duration metric: took 6.609465722s to acquireMachinesLock for "bridge-761631"
	I1026 02:29:03.538075   79140 start.go:93] Provisioning new machine with config: &{Name:bridge-761631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 02:29:03.538201   79140 start.go:125] createHost starting for "" (driver="kvm2")
	I1026 02:29:02.143351   77486 main.go:141] libmachine: (flannel-761631) DBG | Getting to WaitForSSH function...
	I1026 02:29:02.145717   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.146065   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:02.146093   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.146230   77486 main.go:141] libmachine: (flannel-761631) DBG | Using SSH client type: external
	I1026 02:29:02.146251   77486 main.go:141] libmachine: (flannel-761631) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa (-rw-------)
	I1026 02:29:02.146279   77486 main.go:141] libmachine: (flannel-761631) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 02:29:02.146293   77486 main.go:141] libmachine: (flannel-761631) DBG | About to run SSH command:
	I1026 02:29:02.146311   77486 main.go:141] libmachine: (flannel-761631) DBG | exit 0
	I1026 02:29:02.273577   77486 main.go:141] libmachine: (flannel-761631) DBG | SSH cmd err, output: <nil>: 
	I1026 02:29:02.273798   77486 main.go:141] libmachine: (flannel-761631) KVM machine creation complete!
	I1026 02:29:02.274194   77486 main.go:141] libmachine: (flannel-761631) Calling .GetConfigRaw
	I1026 02:29:02.274821   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:02.274998   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:02.275168   77486 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 02:29:02.275185   77486 main.go:141] libmachine: (flannel-761631) Calling .GetState
	I1026 02:29:02.276503   77486 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 02:29:02.276515   77486 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 02:29:02.276520   77486 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 02:29:02.276525   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:02.278979   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.279313   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:02.279349   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.279448   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:02.279592   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.279736   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.279847   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:02.280010   77486 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:02.280224   77486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.248 22 <nil> <nil>}
	I1026 02:29:02.280236   77486 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 02:29:02.384544   77486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:29:02.384569   77486 main.go:141] libmachine: Detecting the provisioner...
	I1026 02:29:02.384579   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:02.387347   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.387757   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:02.387784   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.387993   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:02.388185   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.388319   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.388442   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:02.388649   77486 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:02.388862   77486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.248 22 <nil> <nil>}
	I1026 02:29:02.388877   77486 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 02:29:02.493993   77486 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 02:29:02.494104   77486 main.go:141] libmachine: found compatible host: buildroot
	I1026 02:29:02.494119   77486 main.go:141] libmachine: Provisioning with buildroot...
	I1026 02:29:02.494132   77486 main.go:141] libmachine: (flannel-761631) Calling .GetMachineName
	I1026 02:29:02.494363   77486 buildroot.go:166] provisioning hostname "flannel-761631"
	I1026 02:29:02.494387   77486 main.go:141] libmachine: (flannel-761631) Calling .GetMachineName
	I1026 02:29:02.494578   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:02.496840   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.497245   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:02.497280   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.497392   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:02.497573   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.497695   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.497840   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:02.498023   77486 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:02.498238   77486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.248 22 <nil> <nil>}
	I1026 02:29:02.498255   77486 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-761631 && echo "flannel-761631" | sudo tee /etc/hostname
	I1026 02:29:02.612400   77486 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-761631
	
	I1026 02:29:02.612426   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:02.615521   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.615929   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:02.615962   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.616178   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:02.616337   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.616487   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.616591   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:02.616741   77486 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:02.616965   77486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.248 22 <nil> <nil>}
	I1026 02:29:02.616992   77486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-761631' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-761631/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-761631' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 02:29:02.734935   77486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:29:02.734970   77486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 02:29:02.734993   77486 buildroot.go:174] setting up certificates
	I1026 02:29:02.735002   77486 provision.go:84] configureAuth start
	I1026 02:29:02.735013   77486 main.go:141] libmachine: (flannel-761631) Calling .GetMachineName
	I1026 02:29:02.735299   77486 main.go:141] libmachine: (flannel-761631) Calling .GetIP
	I1026 02:29:02.738345   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.738760   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:02.738787   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.739045   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:02.741283   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.741629   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:02.741657   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.741780   77486 provision.go:143] copyHostCerts
	I1026 02:29:02.741839   77486 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 02:29:02.741859   77486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 02:29:02.741964   77486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 02:29:02.742085   77486 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 02:29:02.742093   77486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 02:29:02.742126   77486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 02:29:02.742226   77486 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 02:29:02.742234   77486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 02:29:02.742257   77486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 02:29:02.742320   77486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.flannel-761631 san=[127.0.0.1 192.168.61.248 flannel-761631 localhost minikube]
	I1026 02:29:02.913157   77486 provision.go:177] copyRemoteCerts
	I1026 02:29:02.913219   77486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 02:29:02.913243   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:02.916026   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.916413   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:02.916444   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.916681   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:02.916851   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.917045   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:02.917183   77486 sshutil.go:53] new ssh client: &{IP:192.168.61.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa Username:docker}
	I1026 02:29:03.005367   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 02:29:03.030552   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1026 02:29:03.053798   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 02:29:03.082304   77486 provision.go:87] duration metric: took 347.290274ms to configureAuth
	I1026 02:29:03.082332   77486 buildroot.go:189] setting minikube options for container-runtime
	I1026 02:29:03.082523   77486 config.go:182] Loaded profile config "flannel-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:29:03.082627   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:03.085726   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.086074   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:03.086112   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.086311   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:03.086514   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:03.086717   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:03.086862   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:03.087074   77486 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:03.087297   77486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.248 22 <nil> <nil>}
	I1026 02:29:03.087323   77486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 02:29:03.299261   77486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 02:29:03.299299   77486 main.go:141] libmachine: Checking connection to Docker...
	I1026 02:29:03.299311   77486 main.go:141] libmachine: (flannel-761631) Calling .GetURL
	I1026 02:29:03.300717   77486 main.go:141] libmachine: (flannel-761631) DBG | Using libvirt version 6000000
	I1026 02:29:03.303302   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.303673   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:03.303718   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.303895   77486 main.go:141] libmachine: Docker is up and running!
	I1026 02:29:03.303909   77486 main.go:141] libmachine: Reticulating splines...
	I1026 02:29:03.303915   77486 client.go:171] duration metric: took 26.411935173s to LocalClient.Create
	I1026 02:29:03.303937   77486 start.go:167] duration metric: took 26.412005141s to libmachine.API.Create "flannel-761631"
	I1026 02:29:03.303946   77486 start.go:293] postStartSetup for "flannel-761631" (driver="kvm2")
	I1026 02:29:03.303965   77486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 02:29:03.303988   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:03.304217   77486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 02:29:03.304244   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:03.306504   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.306863   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:03.306891   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.307064   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:03.307246   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:03.307391   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:03.307554   77486 sshutil.go:53] new ssh client: &{IP:192.168.61.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa Username:docker}
	I1026 02:29:03.388055   77486 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 02:29:03.392300   77486 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 02:29:03.392325   77486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 02:29:03.392386   77486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 02:29:03.392456   77486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 02:29:03.392538   77486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 02:29:03.401915   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:29:03.424503   77486 start.go:296] duration metric: took 120.523915ms for postStartSetup
	I1026 02:29:03.424551   77486 main.go:141] libmachine: (flannel-761631) Calling .GetConfigRaw
	I1026 02:29:03.425142   77486 main.go:141] libmachine: (flannel-761631) Calling .GetIP
	I1026 02:29:03.427498   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.427795   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:03.427818   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.428058   77486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/config.json ...
	I1026 02:29:03.428238   77486 start.go:128] duration metric: took 26.556076835s to createHost
	I1026 02:29:03.428265   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:03.430133   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.430448   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:03.430488   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.430639   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:03.430812   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:03.430944   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:03.431065   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:03.431257   77486 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:03.431451   77486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.248 22 <nil> <nil>}
	I1026 02:29:03.431464   77486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 02:29:03.537852   77486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729909743.512612811
	
	I1026 02:29:03.537874   77486 fix.go:216] guest clock: 1729909743.512612811
	I1026 02:29:03.537881   77486 fix.go:229] Guest: 2024-10-26 02:29:03.512612811 +0000 UTC Remote: 2024-10-26 02:29:03.428253389 +0000 UTC m=+26.667896241 (delta=84.359422ms)
	I1026 02:29:03.537900   77486 fix.go:200] guest clock delta is within tolerance: 84.359422ms
	I1026 02:29:03.537905   77486 start.go:83] releasing machines lock for "flannel-761631", held for 26.665803633s
	I1026 02:29:03.537930   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:03.538175   77486 main.go:141] libmachine: (flannel-761631) Calling .GetIP
	I1026 02:29:03.540690   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.541080   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:03.541108   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.541252   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:03.541794   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:03.541985   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:03.542087   77486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 02:29:03.542120   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:03.542393   77486 ssh_runner.go:195] Run: cat /version.json
	I1026 02:29:03.542416   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:03.544843   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.545212   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:03.545245   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.545308   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.545566   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:03.545733   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:03.545790   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:03.545819   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.545900   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:03.545961   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:03.546043   77486 sshutil.go:53] new ssh client: &{IP:192.168.61.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa Username:docker}
	I1026 02:29:03.546074   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:03.546201   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:03.546317   77486 sshutil.go:53] new ssh client: &{IP:192.168.61.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa Username:docker}
	I1026 02:29:03.655118   77486 ssh_runner.go:195] Run: systemctl --version
	I1026 02:29:03.661097   77486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 02:29:03.820803   77486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 02:29:03.827693   77486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 02:29:03.827765   77486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 02:29:03.843988   77486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 02:29:03.844011   77486 start.go:495] detecting cgroup driver to use...
	I1026 02:29:03.844082   77486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 02:29:03.860998   77486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 02:29:03.875158   77486 docker.go:217] disabling cri-docker service (if available) ...
	I1026 02:29:03.875218   77486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 02:29:03.888848   77486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 02:29:03.902570   77486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 02:29:04.031377   77486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 02:29:04.184250   77486 docker.go:233] disabling docker service ...
	I1026 02:29:04.184302   77486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 02:29:04.200026   77486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 02:29:04.212863   77486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 02:29:04.369442   77486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 02:29:04.485151   77486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 02:29:04.499036   77486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 02:29:04.518134   77486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 02:29:04.518202   77486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:04.527866   77486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 02:29:04.527960   77486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:04.538314   77486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:04.548172   77486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:04.558277   77486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 02:29:04.568312   77486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:04.578600   77486 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:04.594896   77486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
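
The sed and grep commands above edit /etc/crio/crio.conf.d/02-crio.conf in place to set the pause image, switch the cgroup manager, pin conmon's cgroup, and open low ports via a default sysctl. Below is a hedged sketch of what those edits leave behind, written as a small Go program that emits the equivalent drop-in; the section headers follow CRI-O's documented config layout and are an assumption, and the real file carries additional settings not shown here.

package main

import (
	"fmt"
	"os"
)

// Equivalent drop-in produced by the sed edits logged above (values copied
// from those commands; section placement is assumed).
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	// Write to a scratch path; minikube edits the file in place over SSH instead.
	if err := os.WriteFile("02-crio.conf.example", []byte(crioDropIn), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
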
	I1026 02:29:04.605167   77486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 02:29:04.615577   77486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 02:29:04.615634   77486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 02:29:04.628647   77486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 02:29:04.639122   77486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:29:04.796937   77486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 02:29:04.900700   77486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 02:29:04.900770   77486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 02:29:04.905538   77486 start.go:563] Will wait 60s for crictl version
	I1026 02:29:04.905580   77486 ssh_runner.go:195] Run: which crictl
	I1026 02:29:04.908908   77486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 02:29:04.947058   77486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 02:29:04.947158   77486 ssh_runner.go:195] Run: crio --version
	I1026 02:29:04.983443   77486 ssh_runner.go:195] Run: crio --version
	I1026 02:29:05.014057   77486 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 02:29:05.015316   77486 main.go:141] libmachine: (flannel-761631) Calling .GetIP
	I1026 02:29:05.022973   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:05.023624   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:05.023653   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:05.023903   77486 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1026 02:29:05.031431   77486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:29:05.044361   77486 kubeadm.go:883] updating cluster {Name:flannel-761631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.248 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 02:29:05.044522   77486 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:29:05.044596   77486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:29:05.086779   77486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
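
crio.go:510 above decides that the images are not preloaded after inspecting the output of sudo crictl images --output json and failing to find the expected kube-apiserver tag. Here is a rough, self-contained Go sketch of such a check; the JSON field names follow crictl's usual output and should be treated as assumptions rather than a verified schema.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// Shape of `crictl images --output json`; only the field used here is declared.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Requires crictl on the target host, as in the log above.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		panic(err)
	}
	const want = "registry.k8s.io/kube-apiserver:v1.31.2" // the image the log says was missing
	found := false
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				found = true
			}
		}
	}
	fmt.Println("preloaded image present:", found)
}
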
	I1026 02:29:05.086837   77486 ssh_runner.go:195] Run: which lz4
	I1026 02:29:05.090929   77486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 02:29:05.095066   77486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 02:29:05.095099   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1026 02:29:06.384582   77486 crio.go:462] duration metric: took 1.293706653s to copy over tarball
	I1026 02:29:06.384669   77486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 02:29:03.540342   79140 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1026 02:29:03.540543   79140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:03.540599   79140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:03.557622   79140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34775
	I1026 02:29:03.558133   79140 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:03.558727   79140 main.go:141] libmachine: Using API Version  1
	I1026 02:29:03.558774   79140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:03.559132   79140 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:03.559317   79140 main.go:141] libmachine: (bridge-761631) Calling .GetMachineName
	I1026 02:29:03.559462   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:03.559634   79140 start.go:159] libmachine.API.Create for "bridge-761631" (driver="kvm2")
	I1026 02:29:03.559665   79140 client.go:168] LocalClient.Create starting
	I1026 02:29:03.559703   79140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 02:29:03.559745   79140 main.go:141] libmachine: Decoding PEM data...
	I1026 02:29:03.559763   79140 main.go:141] libmachine: Parsing certificate...
	I1026 02:29:03.559842   79140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 02:29:03.559872   79140 main.go:141] libmachine: Decoding PEM data...
	I1026 02:29:03.559888   79140 main.go:141] libmachine: Parsing certificate...
	I1026 02:29:03.559917   79140 main.go:141] libmachine: Running pre-create checks...
	I1026 02:29:03.559929   79140 main.go:141] libmachine: (bridge-761631) Calling .PreCreateCheck
	I1026 02:29:03.560291   79140 main.go:141] libmachine: (bridge-761631) Calling .GetConfigRaw
	I1026 02:29:03.560740   79140 main.go:141] libmachine: Creating machine...
	I1026 02:29:03.560757   79140 main.go:141] libmachine: (bridge-761631) Calling .Create
	I1026 02:29:03.560908   79140 main.go:141] libmachine: (bridge-761631) Creating KVM machine...
	I1026 02:29:03.562333   79140 main.go:141] libmachine: (bridge-761631) DBG | found existing default KVM network
	I1026 02:29:03.563829   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:03.563645   79257 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:27:59:05} reservation:<nil>}
	I1026 02:29:03.565313   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:03.565241   79257 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00034c0e0}
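
network.go above skips 192.168.39.0/24 because it is already in use and settles on the free subnet 192.168.50.0/24. Below is a simplified Go sketch of that kind of probe, checking candidate /24s only against addresses already configured on local interfaces; minikube's real selection logic does more than this, so treat it as an illustration.

package main

import (
	"fmt"
	"net"
)

// subnetTaken reports whether any local interface already has an address
// inside the candidate CIDR.
func subnetTaken(cidr string) (bool, error) {
	_, candidate, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false, err
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Candidate order mirrors the subnets seen in the log above.
	for _, cidr := range []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"} {
		taken, err := subnetTaken(cidr)
		if err != nil {
			panic(err)
		}
		if taken {
			fmt.Println("skipping subnet that is taken:", cidr)
			continue
		}
		fmt.Println("using free private subnet:", cidr)
		break
	}
}
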
	I1026 02:29:03.565379   79140 main.go:141] libmachine: (bridge-761631) DBG | created network xml: 
	I1026 02:29:03.565394   79140 main.go:141] libmachine: (bridge-761631) DBG | <network>
	I1026 02:29:03.565404   79140 main.go:141] libmachine: (bridge-761631) DBG |   <name>mk-bridge-761631</name>
	I1026 02:29:03.565433   79140 main.go:141] libmachine: (bridge-761631) DBG |   <dns enable='no'/>
	I1026 02:29:03.565443   79140 main.go:141] libmachine: (bridge-761631) DBG |   
	I1026 02:29:03.565453   79140 main.go:141] libmachine: (bridge-761631) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1026 02:29:03.565465   79140 main.go:141] libmachine: (bridge-761631) DBG |     <dhcp>
	I1026 02:29:03.565487   79140 main.go:141] libmachine: (bridge-761631) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1026 02:29:03.565502   79140 main.go:141] libmachine: (bridge-761631) DBG |     </dhcp>
	I1026 02:29:03.565512   79140 main.go:141] libmachine: (bridge-761631) DBG |   </ip>
	I1026 02:29:03.565520   79140 main.go:141] libmachine: (bridge-761631) DBG |   
	I1026 02:29:03.565529   79140 main.go:141] libmachine: (bridge-761631) DBG | </network>
	I1026 02:29:03.565538   79140 main.go:141] libmachine: (bridge-761631) DBG | 
	I1026 02:29:03.571055   79140 main.go:141] libmachine: (bridge-761631) DBG | trying to create private KVM network mk-bridge-761631 192.168.50.0/24...
	I1026 02:29:03.641834   79140 main.go:141] libmachine: (bridge-761631) DBG | private KVM network mk-bridge-761631 192.168.50.0/24 created
	I1026 02:29:03.641864   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:03.641748   79257 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:29:03.641875   79140 main.go:141] libmachine: (bridge-761631) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631 ...
	I1026 02:29:03.641899   79140 main.go:141] libmachine: (bridge-761631) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 02:29:03.641928   79140 main.go:141] libmachine: (bridge-761631) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 02:29:03.893026   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:03.892917   79257 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa...
	I1026 02:29:03.982799   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:03.982663   79257 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/bridge-761631.rawdisk...
	I1026 02:29:03.982835   79140 main.go:141] libmachine: (bridge-761631) DBG | Writing magic tar header
	I1026 02:29:03.982849   79140 main.go:141] libmachine: (bridge-761631) DBG | Writing SSH key tar header
	I1026 02:29:03.982862   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:03.982781   79257 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631 ...
	I1026 02:29:03.982939   79140 main.go:141] libmachine: (bridge-761631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631
	I1026 02:29:03.982974   79140 main.go:141] libmachine: (bridge-761631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 02:29:03.982990   79140 main.go:141] libmachine: (bridge-761631) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631 (perms=drwx------)
	I1026 02:29:03.983010   79140 main.go:141] libmachine: (bridge-761631) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 02:29:03.983023   79140 main.go:141] libmachine: (bridge-761631) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 02:29:03.983035   79140 main.go:141] libmachine: (bridge-761631) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 02:29:03.983048   79140 main.go:141] libmachine: (bridge-761631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:29:03.983058   79140 main.go:141] libmachine: (bridge-761631) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 02:29:03.983070   79140 main.go:141] libmachine: (bridge-761631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 02:29:03.983081   79140 main.go:141] libmachine: (bridge-761631) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 02:29:03.983095   79140 main.go:141] libmachine: (bridge-761631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 02:29:03.983109   79140 main.go:141] libmachine: (bridge-761631) DBG | Checking permissions on dir: /home/jenkins
	I1026 02:29:03.983120   79140 main.go:141] libmachine: (bridge-761631) DBG | Checking permissions on dir: /home
	I1026 02:29:03.983133   79140 main.go:141] libmachine: (bridge-761631) DBG | Skipping /home - not owner
	I1026 02:29:03.983148   79140 main.go:141] libmachine: (bridge-761631) Creating domain...
	I1026 02:29:03.984253   79140 main.go:141] libmachine: (bridge-761631) define libvirt domain using xml: 
	I1026 02:29:03.984280   79140 main.go:141] libmachine: (bridge-761631) <domain type='kvm'>
	I1026 02:29:03.984290   79140 main.go:141] libmachine: (bridge-761631)   <name>bridge-761631</name>
	I1026 02:29:03.984301   79140 main.go:141] libmachine: (bridge-761631)   <memory unit='MiB'>3072</memory>
	I1026 02:29:03.984310   79140 main.go:141] libmachine: (bridge-761631)   <vcpu>2</vcpu>
	I1026 02:29:03.984314   79140 main.go:141] libmachine: (bridge-761631)   <features>
	I1026 02:29:03.984319   79140 main.go:141] libmachine: (bridge-761631)     <acpi/>
	I1026 02:29:03.984324   79140 main.go:141] libmachine: (bridge-761631)     <apic/>
	I1026 02:29:03.984330   79140 main.go:141] libmachine: (bridge-761631)     <pae/>
	I1026 02:29:03.984336   79140 main.go:141] libmachine: (bridge-761631)     
	I1026 02:29:03.984341   79140 main.go:141] libmachine: (bridge-761631)   </features>
	I1026 02:29:03.984351   79140 main.go:141] libmachine: (bridge-761631)   <cpu mode='host-passthrough'>
	I1026 02:29:03.984385   79140 main.go:141] libmachine: (bridge-761631)   
	I1026 02:29:03.984403   79140 main.go:141] libmachine: (bridge-761631)   </cpu>
	I1026 02:29:03.984430   79140 main.go:141] libmachine: (bridge-761631)   <os>
	I1026 02:29:03.984451   79140 main.go:141] libmachine: (bridge-761631)     <type>hvm</type>
	I1026 02:29:03.984464   79140 main.go:141] libmachine: (bridge-761631)     <boot dev='cdrom'/>
	I1026 02:29:03.984475   79140 main.go:141] libmachine: (bridge-761631)     <boot dev='hd'/>
	I1026 02:29:03.984486   79140 main.go:141] libmachine: (bridge-761631)     <bootmenu enable='no'/>
	I1026 02:29:03.984509   79140 main.go:141] libmachine: (bridge-761631)   </os>
	I1026 02:29:03.984518   79140 main.go:141] libmachine: (bridge-761631)   <devices>
	I1026 02:29:03.984530   79140 main.go:141] libmachine: (bridge-761631)     <disk type='file' device='cdrom'>
	I1026 02:29:03.984546   79140 main.go:141] libmachine: (bridge-761631)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/boot2docker.iso'/>
	I1026 02:29:03.984558   79140 main.go:141] libmachine: (bridge-761631)       <target dev='hdc' bus='scsi'/>
	I1026 02:29:03.984569   79140 main.go:141] libmachine: (bridge-761631)       <readonly/>
	I1026 02:29:03.984580   79140 main.go:141] libmachine: (bridge-761631)     </disk>
	I1026 02:29:03.984588   79140 main.go:141] libmachine: (bridge-761631)     <disk type='file' device='disk'>
	I1026 02:29:03.984608   79140 main.go:141] libmachine: (bridge-761631)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 02:29:03.984626   79140 main.go:141] libmachine: (bridge-761631)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/bridge-761631.rawdisk'/>
	I1026 02:29:03.984639   79140 main.go:141] libmachine: (bridge-761631)       <target dev='hda' bus='virtio'/>
	I1026 02:29:03.984649   79140 main.go:141] libmachine: (bridge-761631)     </disk>
	I1026 02:29:03.984659   79140 main.go:141] libmachine: (bridge-761631)     <interface type='network'>
	I1026 02:29:03.984677   79140 main.go:141] libmachine: (bridge-761631)       <source network='mk-bridge-761631'/>
	I1026 02:29:03.984686   79140 main.go:141] libmachine: (bridge-761631)       <model type='virtio'/>
	I1026 02:29:03.984707   79140 main.go:141] libmachine: (bridge-761631)     </interface>
	I1026 02:29:03.984723   79140 main.go:141] libmachine: (bridge-761631)     <interface type='network'>
	I1026 02:29:03.984734   79140 main.go:141] libmachine: (bridge-761631)       <source network='default'/>
	I1026 02:29:03.984746   79140 main.go:141] libmachine: (bridge-761631)       <model type='virtio'/>
	I1026 02:29:03.984759   79140 main.go:141] libmachine: (bridge-761631)     </interface>
	I1026 02:29:03.984771   79140 main.go:141] libmachine: (bridge-761631)     <serial type='pty'>
	I1026 02:29:03.984782   79140 main.go:141] libmachine: (bridge-761631)       <target port='0'/>
	I1026 02:29:03.984793   79140 main.go:141] libmachine: (bridge-761631)     </serial>
	I1026 02:29:03.984803   79140 main.go:141] libmachine: (bridge-761631)     <console type='pty'>
	I1026 02:29:03.984814   79140 main.go:141] libmachine: (bridge-761631)       <target type='serial' port='0'/>
	I1026 02:29:03.984823   79140 main.go:141] libmachine: (bridge-761631)     </console>
	I1026 02:29:03.984848   79140 main.go:141] libmachine: (bridge-761631)     <rng model='virtio'>
	I1026 02:29:03.984866   79140 main.go:141] libmachine: (bridge-761631)       <backend model='random'>/dev/random</backend>
	I1026 02:29:03.984879   79140 main.go:141] libmachine: (bridge-761631)     </rng>
	I1026 02:29:03.984888   79140 main.go:141] libmachine: (bridge-761631)     
	I1026 02:29:03.984897   79140 main.go:141] libmachine: (bridge-761631)     
	I1026 02:29:03.984907   79140 main.go:141] libmachine: (bridge-761631)   </devices>
	I1026 02:29:03.984919   79140 main.go:141] libmachine: (bridge-761631) </domain>
	I1026 02:29:03.984925   79140 main.go:141] libmachine: (bridge-761631) 
	I1026 02:29:03.990144   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:f8:6c:2f in network default
	I1026 02:29:03.990722   79140 main.go:141] libmachine: (bridge-761631) Ensuring networks are active...
	I1026 02:29:03.990771   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:03.991439   79140 main.go:141] libmachine: (bridge-761631) Ensuring network default is active
	I1026 02:29:03.991787   79140 main.go:141] libmachine: (bridge-761631) Ensuring network mk-bridge-761631 is active
	I1026 02:29:03.992334   79140 main.go:141] libmachine: (bridge-761631) Getting domain xml...
	I1026 02:29:03.993137   79140 main.go:141] libmachine: (bridge-761631) Creating domain...
	I1026 02:29:05.398126   79140 main.go:141] libmachine: (bridge-761631) Waiting to get IP...
	I1026 02:29:05.399080   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:05.399572   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:05.399599   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:05.399546   79257 retry.go:31] will retry after 209.544491ms: waiting for machine to come up
	I1026 02:29:05.611703   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:05.614223   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:05.614254   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:05.614128   79257 retry.go:31] will retry after 236.803159ms: waiting for machine to come up
	I1026 02:29:05.852793   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:05.853468   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:05.853493   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:05.853338   79257 retry.go:31] will retry after 403.786232ms: waiting for machine to come up
	I1026 02:29:06.259139   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:06.259801   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:06.259825   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:06.259763   79257 retry.go:31] will retry after 468.969978ms: waiting for machine to come up
	I1026 02:29:06.730685   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:06.731406   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:06.731439   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:06.731358   79257 retry.go:31] will retry after 592.815717ms: waiting for machine to come up
	I1026 02:29:08.766592   77486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.3818896s)
	I1026 02:29:08.766622   77486 crio.go:469] duration metric: took 2.382010529s to extract the tarball
	I1026 02:29:08.766632   77486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 02:29:08.806348   77486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:29:08.854169   77486 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 02:29:08.854198   77486 cache_images.go:84] Images are preloaded, skipping loading
	I1026 02:29:08.854208   77486 kubeadm.go:934] updating node { 192.168.61.248 8443 v1.31.2 crio true true} ...
	I1026 02:29:08.854321   77486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-761631 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:flannel-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I1026 02:29:08.854406   77486 ssh_runner.go:195] Run: crio config
	I1026 02:29:08.916328   77486 cni.go:84] Creating CNI manager for "flannel"
	I1026 02:29:08.916352   77486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 02:29:08.916375   77486 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.248 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-761631 NodeName:flannel-761631 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 02:29:08.916526   77486 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-761631"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.248"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.248"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 02:29:08.916582   77486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 02:29:08.926749   77486 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 02:29:08.926808   77486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 02:29:08.935370   77486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1026 02:29:08.960660   77486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 02:29:08.976856   77486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1026 02:29:08.993075   77486 ssh_runner.go:195] Run: grep 192.168.61.248	control-plane.minikube.internal$ /etc/hosts
	I1026 02:29:08.996750   77486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.248	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:29:09.009318   77486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:29:09.152886   77486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:29:09.172962   77486 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631 for IP: 192.168.61.248
	I1026 02:29:09.172984   77486 certs.go:194] generating shared ca certs ...
	I1026 02:29:09.173004   77486 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:09.173163   77486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 02:29:09.173221   77486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 02:29:09.173233   77486 certs.go:256] generating profile certs ...
	I1026 02:29:09.173299   77486 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.key
	I1026 02:29:09.173315   77486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt with IP's: []
	I1026 02:29:09.340952   77486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt ...
	I1026 02:29:09.340985   77486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: {Name:mk60fd82ad62306bfc219fc9d355b470e6d5fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:09.341321   77486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.key ...
	I1026 02:29:09.341346   77486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.key: {Name:mkd499e20bc992f3b2dc2fb5764fdc851cf3ca5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:09.342116   77486 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.key.e4a97253
	I1026 02:29:09.342141   77486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.crt.e4a97253 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.248]
	I1026 02:29:09.413853   77486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.crt.e4a97253 ...
	I1026 02:29:09.413879   77486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.crt.e4a97253: {Name:mk583e07ca25dcda4e47e41be43863a944cbb66a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:09.414033   77486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.key.e4a97253 ...
	I1026 02:29:09.414048   77486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.key.e4a97253: {Name:mk94450c712e1ffcb37d68122bde08f681bf9f1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:09.414142   77486 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.crt.e4a97253 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.crt
	I1026 02:29:09.414248   77486 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.key.e4a97253 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.key
	I1026 02:29:09.414330   77486 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/proxy-client.key
	I1026 02:29:09.414349   77486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/proxy-client.crt with IP's: []
	I1026 02:29:09.552525   77486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/proxy-client.crt ...
	I1026 02:29:09.552552   77486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/proxy-client.crt: {Name:mk35e307ae13de04b93087b17c0414e37720490b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:09.552739   77486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/proxy-client.key ...
	I1026 02:29:09.552752   77486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/proxy-client.key: {Name:mk2d7c24cb98b88b8f1e364eed062e2b83bf86cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:09.552963   77486 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 02:29:09.553004   77486 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 02:29:09.553015   77486 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 02:29:09.553036   77486 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 02:29:09.553059   77486 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 02:29:09.553081   77486 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 02:29:09.553119   77486 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:29:09.553774   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 02:29:09.580676   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 02:29:09.604867   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 02:29:09.631527   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 02:29:09.662390   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 02:29:09.689868   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 02:29:09.717846   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 02:29:09.746298   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 02:29:09.772659   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 02:29:09.797712   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 02:29:09.823043   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 02:29:09.854285   77486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 02:29:09.878022   77486 ssh_runner.go:195] Run: openssl version
	I1026 02:29:09.885068   77486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 02:29:09.903443   77486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 02:29:09.910261   77486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 02:29:09.910309   77486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 02:29:09.918367   77486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 02:29:09.937816   77486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 02:29:09.949781   77486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:29:09.954565   77486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:29:09.954631   77486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:29:09.960651   77486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 02:29:09.971689   77486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 02:29:09.982432   77486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 02:29:09.986796   77486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 02:29:09.986861   77486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 02:29:09.992646   77486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
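
The openssl x509 -hash / ln -fs pairs above install each CA certificate under /etc/ssl/certs using its OpenSSL subject hash as the link name. Below is a small Go sketch of the same hash-and-symlink step, shelling out to openssl exactly as the log does; the paths are illustrative and writing to /etc/ssl/certs requires root.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the two commands in the log: compute the certificate's
// OpenSSL subject hash, then force-link <hash>.0 in the system cert directory.
func installCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -fs by replacing any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	// Illustrative input; the log links minikubeCA.pem, 17615.pem, and 176152.pem this way.
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
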
	I1026 02:29:10.003297   77486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 02:29:10.007036   77486 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 02:29:10.007084   77486 kubeadm.go:392] StartCluster: {Name:flannel-761631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.248 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:29:10.007146   77486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 02:29:10.007183   77486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:29:10.051824   77486 cri.go:89] found id: ""
	I1026 02:29:10.051920   77486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 02:29:10.062287   77486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 02:29:10.075721   77486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:29:10.089939   77486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:29:10.089966   77486 kubeadm.go:157] found existing configuration files:
	
	I1026 02:29:10.090018   77486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 02:29:10.100358   77486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:29:10.100422   77486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:29:10.112799   77486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 02:29:10.124510   77486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:29:10.124578   77486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:29:10.137012   77486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 02:29:10.148800   77486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:29:10.148854   77486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:29:10.161156   77486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 02:29:10.171960   77486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:29:10.172035   77486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 02:29:10.183113   77486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 02:29:10.238771   77486 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1026 02:29:10.238873   77486 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 02:29:10.363978   77486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 02:29:10.364113   77486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 02:29:10.364233   77486 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 02:29:10.375354   77486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 02:29:10.453508   77486 out.go:235]   - Generating certificates and keys ...
	I1026 02:29:10.453626   77486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 02:29:10.453712   77486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 02:29:10.505380   77486 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 02:29:10.607383   77486 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1026 02:29:10.985093   77486 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1026 02:29:11.090154   77486 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1026 02:29:11.220927   77486 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1026 02:29:11.221130   77486 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-761631 localhost] and IPs [192.168.61.248 127.0.0.1 ::1]
	I1026 02:29:11.561401   77486 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1026 02:29:11.561650   77486 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-761631 localhost] and IPs [192.168.61.248 127.0.0.1 ::1]
	I1026 02:29:11.633523   77486 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 02:29:11.784291   77486 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 02:29:07.326305   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:07.327055   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:07.327086   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:07.327020   79257 retry.go:31] will retry after 588.834605ms: waiting for machine to come up
	I1026 02:29:07.917851   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:07.918379   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:07.918408   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:07.918325   79257 retry.go:31] will retry after 853.665263ms: waiting for machine to come up
	I1026 02:29:08.773683   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:08.774257   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:08.774284   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:08.774213   79257 retry.go:31] will retry after 1.370060539s: waiting for machine to come up
	I1026 02:29:10.146643   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:10.147145   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:10.147173   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:10.147109   79257 retry.go:31] will retry after 1.521712642s: waiting for machine to come up
	I1026 02:29:11.670458   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:11.670928   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:11.670955   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:11.670888   79257 retry.go:31] will retry after 1.580274021s: waiting for machine to come up
	I1026 02:29:12.001325   77486 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1026 02:29:12.001578   77486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 02:29:12.143567   77486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 02:29:12.239025   77486 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 02:29:12.472872   77486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 02:29:12.906638   77486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 02:29:13.057013   77486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 02:29:13.057808   77486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 02:29:13.060204   77486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 02:29:13.062029   77486 out.go:235]   - Booting up control plane ...
	I1026 02:29:13.062147   77486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 02:29:13.062254   77486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 02:29:13.062364   77486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 02:29:13.087233   77486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 02:29:13.094292   77486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 02:29:13.094459   77486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 02:29:13.262679   77486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 02:29:13.262844   77486 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 02:29:13.764852   77486 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.822984ms
	I1026 02:29:13.764970   77486 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1026 02:29:13.252346   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:13.252785   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:13.252813   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:13.252741   79257 retry.go:31] will retry after 2.501165629s: waiting for machine to come up
	I1026 02:29:15.755812   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:15.756157   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:15.756178   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:15.756105   79257 retry.go:31] will retry after 3.067156454s: waiting for machine to come up
	I1026 02:29:19.262401   77486 kubeadm.go:310] [api-check] The API server is healthy after 5.501282988s
	I1026 02:29:19.276398   77486 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 02:29:19.294112   77486 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 02:29:19.320353   77486 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 02:29:19.320636   77486 kubeadm.go:310] [mark-control-plane] Marking the node flannel-761631 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 02:29:19.331623   77486 kubeadm.go:310] [bootstrap-token] Using token: igjkpp.i376pagzjlp08yff
	I1026 02:29:19.332835   77486 out.go:235]   - Configuring RBAC rules ...
	I1026 02:29:19.332973   77486 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 02:29:19.339298   77486 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 02:29:19.348141   77486 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 02:29:19.352106   77486 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 02:29:19.361610   77486 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 02:29:19.366790   77486 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 02:29:19.669645   77486 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 02:29:20.102403   77486 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1026 02:29:20.669559   77486 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1026 02:29:20.671428   77486 kubeadm.go:310] 
	I1026 02:29:20.671493   77486 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1026 02:29:20.671500   77486 kubeadm.go:310] 
	I1026 02:29:20.671583   77486 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1026 02:29:20.671594   77486 kubeadm.go:310] 
	I1026 02:29:20.671619   77486 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1026 02:29:20.671676   77486 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 02:29:20.671744   77486 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 02:29:20.671753   77486 kubeadm.go:310] 
	I1026 02:29:20.671798   77486 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1026 02:29:20.671804   77486 kubeadm.go:310] 
	I1026 02:29:20.671843   77486 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 02:29:20.671850   77486 kubeadm.go:310] 
	I1026 02:29:20.671892   77486 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1026 02:29:20.672016   77486 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 02:29:20.672105   77486 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 02:29:20.672132   77486 kubeadm.go:310] 
	I1026 02:29:20.672250   77486 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 02:29:20.672359   77486 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1026 02:29:20.672370   77486 kubeadm.go:310] 
	I1026 02:29:20.672460   77486 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token igjkpp.i376pagzjlp08yff \
	I1026 02:29:20.672568   77486 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d \
	I1026 02:29:20.672611   77486 kubeadm.go:310] 	--control-plane 
	I1026 02:29:20.672622   77486 kubeadm.go:310] 
	I1026 02:29:20.672737   77486 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1026 02:29:20.672746   77486 kubeadm.go:310] 
	I1026 02:29:20.672843   77486 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token igjkpp.i376pagzjlp08yff \
	I1026 02:29:20.672961   77486 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d 
	I1026 02:29:20.673835   77486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
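The join commands printed above embed the bootstrap token minted for this run (24h TTL by kubeadm default). As a sketch only, not part of the test output: if such a token has expired, a fresh worker join line can be printed on the control-plane node with:
    # prints a ready-to-use "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash ..." line
    sudo kubeadm token create --print-join-command
    # list existing bootstrap tokens and their TTLs
    sudo kubeadm token list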
	I1026 02:29:20.673856   77486 cni.go:84] Creating CNI manager for "flannel"
	I1026 02:29:20.675502   77486 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I1026 02:29:20.676788   77486 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 02:29:20.683988   77486 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1026 02:29:20.684010   77486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I1026 02:29:20.700524   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
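With the flannel manifest applied via the kubectl call above, a quick way to watch the CNI roll out is to look for the flannel DaemonSet pods; a sketch, assuming the profile name doubles as the kubectl context (the namespace is kube-flannel or kube-system depending on the manifest revision):
    kubectl --context flannel-761631 get daemonsets -A | grep -i flannel
    kubectl --context flannel-761631 get pods -A -o wide | grep -i flannel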
	I1026 02:29:21.081090   77486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 02:29:21.081142   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:21.081181   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-761631 minikube.k8s.io/updated_at=2024_10_26T02_29_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=flannel-761631 minikube.k8s.io/primary=true
	I1026 02:29:21.118686   77486 ops.go:34] apiserver oom_adj: -16
	I1026 02:29:21.257435   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:21.758243   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:18.825123   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:18.825724   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:18.825750   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:18.825657   79257 retry.go:31] will retry after 3.727894276s: waiting for machine to come up
	I1026 02:29:22.258398   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:22.757544   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:23.258231   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:23.758129   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:24.258255   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:24.339513   77486 kubeadm.go:1113] duration metric: took 3.258424764s to wait for elevateKubeSystemPrivileges
	I1026 02:29:24.339545   77486 kubeadm.go:394] duration metric: took 14.332464563s to StartCluster
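The repeated "get sa default" calls above appear to poll until the cluster's default ServiceAccount exists before the elevated kube-system privileges are considered settled; a standalone equivalent of that wait, as a sketch:
    # poll until the "default" ServiceAccount shows up in the default namespace
    until kubectl --context flannel-761631 -n default get serviceaccount default >/dev/null 2>&1; do
        sleep 0.5
    done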
	I1026 02:29:24.339561   77486 settings.go:142] acquiring lock: {Name:mkb363a7a1b1532a7f832b54a0283d0a9e3d2b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:24.339635   77486 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:29:24.340556   77486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:24.340779   77486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 02:29:24.340778   77486 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.248 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 02:29:24.340801   77486 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 02:29:24.340885   77486 addons.go:69] Setting storage-provisioner=true in profile "flannel-761631"
	I1026 02:29:24.340957   77486 addons.go:234] Setting addon storage-provisioner=true in "flannel-761631"
	I1026 02:29:24.340981   77486 config.go:182] Loaded profile config "flannel-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:29:24.340992   77486 host.go:66] Checking if "flannel-761631" exists ...
	I1026 02:29:24.340898   77486 addons.go:69] Setting default-storageclass=true in profile "flannel-761631"
	I1026 02:29:24.341042   77486 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-761631"
	I1026 02:29:24.341453   77486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:24.341474   77486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:24.341495   77486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:24.341510   77486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:24.342409   77486 out.go:177] * Verifying Kubernetes components...
	I1026 02:29:24.343888   77486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:29:24.356991   77486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46349
	I1026 02:29:24.357257   77486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40659
	I1026 02:29:24.357505   77486 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:24.357709   77486 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:24.358102   77486 main.go:141] libmachine: Using API Version  1
	I1026 02:29:24.358128   77486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:24.358220   77486 main.go:141] libmachine: Using API Version  1
	I1026 02:29:24.358237   77486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:24.358485   77486 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:24.358522   77486 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:24.358661   77486 main.go:141] libmachine: (flannel-761631) Calling .GetState
	I1026 02:29:24.358997   77486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:24.359041   77486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:24.361873   77486 addons.go:234] Setting addon default-storageclass=true in "flannel-761631"
	I1026 02:29:24.361912   77486 host.go:66] Checking if "flannel-761631" exists ...
	I1026 02:29:24.362258   77486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:24.362296   77486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:24.373806   77486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32855
	I1026 02:29:24.374321   77486 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:24.374881   77486 main.go:141] libmachine: Using API Version  1
	I1026 02:29:24.374918   77486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:24.375198   77486 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:24.375389   77486 main.go:141] libmachine: (flannel-761631) Calling .GetState
	I1026 02:29:24.377236   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:24.378531   77486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I1026 02:29:24.378972   77486 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:24.378979   77486 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:29:24.379404   77486 main.go:141] libmachine: Using API Version  1
	I1026 02:29:24.379426   77486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:24.379718   77486 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:24.380127   77486 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:29:24.380134   77486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:24.380142   77486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 02:29:24.380193   77486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:24.380263   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:24.383226   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:24.383664   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:24.383691   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:24.383955   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:24.384123   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:24.384287   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:24.384398   77486 sshutil.go:53] new ssh client: &{IP:192.168.61.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa Username:docker}
	I1026 02:29:24.395245   77486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43507
	I1026 02:29:24.395646   77486 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:24.396110   77486 main.go:141] libmachine: Using API Version  1
	I1026 02:29:24.396129   77486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:24.396426   77486 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:24.396588   77486 main.go:141] libmachine: (flannel-761631) Calling .GetState
	I1026 02:29:24.397917   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:24.398103   77486 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 02:29:24.398119   77486 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 02:29:24.398138   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:24.400434   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:24.400852   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:24.400877   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:24.401040   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:24.401224   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:24.401361   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:24.401496   77486 sshutil.go:53] new ssh client: &{IP:192.168.61.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa Username:docker}
	I1026 02:29:24.549354   77486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:29:24.549551   77486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 02:29:24.566375   77486 node_ready.go:35] waiting up to 15m0s for node "flannel-761631" to be "Ready" ...
	I1026 02:29:24.656359   77486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:29:24.711281   77486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 02:29:25.005762   77486 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
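The line above confirms that the sed pipeline a few entries earlier rewrote the coredns ConfigMap with a hosts block for host.minikube.internal; a sketch of how to eyeball the result:
    kubectl --context flannel-761631 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    # expected to include: 192.168.61.1 host.minikube.internal, followed by fallthrough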
	I1026 02:29:25.455191   77486 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:25.455260   77486 main.go:141] libmachine: (flannel-761631) Calling .Close
	I1026 02:29:25.455226   77486 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:25.455352   77486 main.go:141] libmachine: (flannel-761631) Calling .Close
	I1026 02:29:25.455623   77486 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:25.455636   77486 main.go:141] libmachine: (flannel-761631) DBG | Closing plugin on server side
	I1026 02:29:25.455640   77486 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:25.455656   77486 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:25.455675   77486 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:25.455679   77486 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:25.455686   77486 main.go:141] libmachine: (flannel-761631) Calling .Close
	I1026 02:29:25.455691   77486 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:25.455701   77486 main.go:141] libmachine: (flannel-761631) Calling .Close
	I1026 02:29:25.455902   77486 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:25.455925   77486 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:25.456009   77486 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:25.456022   77486 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:25.456059   77486 main.go:141] libmachine: (flannel-761631) DBG | Closing plugin on server side
	I1026 02:29:25.466730   77486 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:25.466753   77486 main.go:141] libmachine: (flannel-761631) Calling .Close
	I1026 02:29:25.467021   77486 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:25.467037   77486 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:25.468528   77486 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1026 02:29:25.469503   77486 addons.go:510] duration metric: took 1.128704492s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 02:29:25.510240   77486 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-761631" context rescaled to 1 replicas
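The rescale above trims coredns from the kubeadm default of two replicas down to one; the equivalent manual step, as a sketch:
    kubectl --context flannel-761631 -n kube-system scale deployment coredns --replicas=1
    kubectl --context flannel-761631 -n kube-system get deployment coredns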
	I1026 02:29:26.569270   77486 node_ready.go:53] node "flannel-761631" has status "Ready":"False"
	I1026 02:29:22.554616   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:22.555173   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:22.555199   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:22.555136   79257 retry.go:31] will retry after 5.242559388s: waiting for machine to come up
	I1026 02:29:27.799416   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:27.799964   79140 main.go:141] libmachine: (bridge-761631) Found IP for machine: 192.168.50.234
	I1026 02:29:27.799998   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has current primary IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:27.800007   79140 main.go:141] libmachine: (bridge-761631) Reserving static IP address...
	I1026 02:29:27.800402   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find host DHCP lease matching {name: "bridge-761631", mac: "52:54:00:62:c2:12", ip: "192.168.50.234"} in network mk-bridge-761631
	I1026 02:29:27.878412   79140 main.go:141] libmachine: (bridge-761631) DBG | Getting to WaitForSSH function...
	I1026 02:29:27.878453   79140 main.go:141] libmachine: (bridge-761631) Reserved static IP address: 192.168.50.234
	I1026 02:29:27.878467   79140 main.go:141] libmachine: (bridge-761631) Waiting for SSH to be available...
	I1026 02:29:27.881553   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:27.882058   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:27.882088   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:27.882238   79140 main.go:141] libmachine: (bridge-761631) DBG | Using SSH client type: external
	I1026 02:29:27.882266   79140 main.go:141] libmachine: (bridge-761631) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa (-rw-------)
	I1026 02:29:27.882294   79140 main.go:141] libmachine: (bridge-761631) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 02:29:27.882325   79140 main.go:141] libmachine: (bridge-761631) DBG | About to run SSH command:
	I1026 02:29:27.882337   79140 main.go:141] libmachine: (bridge-761631) DBG | exit 0
	I1026 02:29:28.009463   79140 main.go:141] libmachine: (bridge-761631) DBG | SSH cmd err, output: <nil>: 
	I1026 02:29:28.009733   79140 main.go:141] libmachine: (bridge-761631) KVM machine creation complete!
	I1026 02:29:28.010053   79140 main.go:141] libmachine: (bridge-761631) Calling .GetConfigRaw
	I1026 02:29:28.010540   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:28.010723   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:28.010878   79140 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 02:29:28.010891   79140 main.go:141] libmachine: (bridge-761631) Calling .GetState
	I1026 02:29:28.012129   79140 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 02:29:28.012143   79140 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 02:29:28.012149   79140 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 02:29:28.012164   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:28.014418   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.014769   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.014795   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.014961   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:28.015109   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.015246   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.015358   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:28.015470   79140 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:28.015657   79140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I1026 02:29:28.015667   79140 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 02:29:28.120712   79140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:29:28.120741   79140 main.go:141] libmachine: Detecting the provisioner...
	I1026 02:29:28.120749   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:28.123722   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.124062   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.124088   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.124294   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:28.124490   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.124640   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.124763   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:28.124922   79140 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:28.125173   79140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I1026 02:29:28.125188   79140 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 02:29:28.233890   79140 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 02:29:28.233981   79140 main.go:141] libmachine: found compatible host: buildroot
	I1026 02:29:28.233994   79140 main.go:141] libmachine: Provisioning with buildroot...
	I1026 02:29:28.234006   79140 main.go:141] libmachine: (bridge-761631) Calling .GetMachineName
	I1026 02:29:28.234260   79140 buildroot.go:166] provisioning hostname "bridge-761631"
	I1026 02:29:28.234288   79140 main.go:141] libmachine: (bridge-761631) Calling .GetMachineName
	I1026 02:29:28.234470   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:28.236972   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.237358   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.237385   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.237545   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:28.237702   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.237848   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.237977   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:28.238127   79140 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:28.238336   79140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I1026 02:29:28.238348   79140 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-761631 && echo "bridge-761631" | sudo tee /etc/hostname
	I1026 02:29:28.358934   79140 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-761631
	
	I1026 02:29:28.358968   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:28.361630   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.361980   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.362006   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.362152   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:28.362336   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.362488   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.362601   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:28.362925   79140 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:28.363138   79140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I1026 02:29:28.363154   79140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-761631' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-761631/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-761631' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 02:29:28.483387   79140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
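The hosts script above is idempotent: it only touches the 127.0.1.1 entry when the new name is missing. A minimal check over the same SSH session that both the hostname and the hosts entry took effect, as a sketch:
    hostname
    grep -n 'bridge-761631' /etc/hostname /etc/hosts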
	I1026 02:29:28.483413   79140 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 02:29:28.483466   79140 buildroot.go:174] setting up certificates
	I1026 02:29:28.483478   79140 provision.go:84] configureAuth start
	I1026 02:29:28.483488   79140 main.go:141] libmachine: (bridge-761631) Calling .GetMachineName
	I1026 02:29:28.483738   79140 main.go:141] libmachine: (bridge-761631) Calling .GetIP
	I1026 02:29:28.486204   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.486517   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.486554   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.486692   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:28.488771   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.489078   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.489113   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.489181   79140 provision.go:143] copyHostCerts
	I1026 02:29:28.489267   79140 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 02:29:28.489283   79140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 02:29:28.489350   79140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 02:29:28.489492   79140 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 02:29:28.489501   79140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 02:29:28.489531   79140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 02:29:28.489618   79140 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 02:29:28.489627   79140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 02:29:28.489654   79140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 02:29:28.489738   79140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.bridge-761631 san=[127.0.0.1 192.168.50.234 bridge-761631 localhost minikube]
	I1026 02:29:28.606055   79140 provision.go:177] copyRemoteCerts
	I1026 02:29:28.606128   79140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 02:29:28.606157   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:28.608894   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.609268   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.609294   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.609532   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:28.609700   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.609832   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:28.609925   79140 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa Username:docker}
	I1026 02:29:28.695542   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 02:29:28.719597   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 02:29:28.741478   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 02:29:28.762510   79140 provision.go:87] duration metric: took 279.018391ms to configureAuth
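configureAuth copies the CA plus a freshly signed server certificate and key into /etc/docker on the guest; the subject and SANs recorded in the provision.go:117 line above can be double-checked in-guest, as a sketch (assuming openssl ships in the Buildroot image):
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'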
	I1026 02:29:28.762541   79140 buildroot.go:189] setting minikube options for container-runtime
	I1026 02:29:28.762714   79140 config.go:182] Loaded profile config "bridge-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:29:28.762780   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:28.765305   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.765735   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.765769   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.765907   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:28.766068   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.766220   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.766347   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:28.766500   79140 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:28.766707   79140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I1026 02:29:28.766723   79140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 02:29:28.990952   79140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
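The tee above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts crio, so registries reachable inside the 10.96.0.0/12 service CIDR can be pulled from without TLS verification; verifying on the guest, as a sketch:
    cat /etc/sysconfig/crio.minikube
    sudo systemctl is-active crio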
	
	I1026 02:29:28.990986   79140 main.go:141] libmachine: Checking connection to Docker...
	I1026 02:29:28.990996   79140 main.go:141] libmachine: (bridge-761631) Calling .GetURL
	I1026 02:29:28.992009   79140 main.go:141] libmachine: (bridge-761631) DBG | Using libvirt version 6000000
	I1026 02:29:28.994355   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.994667   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.994708   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.994861   79140 main.go:141] libmachine: Docker is up and running!
	I1026 02:29:28.994877   79140 main.go:141] libmachine: Reticulating splines...
	I1026 02:29:28.994883   79140 client.go:171] duration metric: took 25.435212479s to LocalClient.Create
	I1026 02:29:28.994904   79140 start.go:167] duration metric: took 25.435274209s to libmachine.API.Create "bridge-761631"
	I1026 02:29:28.994911   79140 start.go:293] postStartSetup for "bridge-761631" (driver="kvm2")
	I1026 02:29:28.994929   79140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 02:29:28.994946   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:28.995173   79140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 02:29:28.995201   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:28.997253   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.997615   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.997644   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.997817   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:28.997978   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.998112   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:28.998248   79140 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa Username:docker}
	I1026 02:29:29.083262   79140 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 02:29:29.087047   79140 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 02:29:29.087079   79140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 02:29:29.087151   79140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 02:29:29.087269   79140 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 02:29:29.087386   79140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 02:29:29.096639   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:29:29.119723   79140 start.go:296] duration metric: took 124.795288ms for postStartSetup
	I1026 02:29:29.119781   79140 main.go:141] libmachine: (bridge-761631) Calling .GetConfigRaw
	I1026 02:29:29.120466   79140 main.go:141] libmachine: (bridge-761631) Calling .GetIP
	I1026 02:29:29.123262   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.123663   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:29.123690   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.123909   79140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/config.json ...
	I1026 02:29:29.124105   79140 start.go:128] duration metric: took 25.58589441s to createHost
	I1026 02:29:29.124126   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:29.126058   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.126404   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:29.126425   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.126610   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:29.126763   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:29.126888   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:29.127003   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:29.127167   79140 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:29.127326   79140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I1026 02:29:29.127336   79140 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 02:29:29.238917   79140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729909769.216503060
	
	I1026 02:29:29.238942   79140 fix.go:216] guest clock: 1729909769.216503060
	I1026 02:29:29.238952   79140 fix.go:229] Guest: 2024-10-26 02:29:29.21650306 +0000 UTC Remote: 2024-10-26 02:29:29.124116784 +0000 UTC m=+32.306517015 (delta=92.386276ms)
	I1026 02:29:29.238985   79140 fix.go:200] guest clock delta is within tolerance: 92.386276ms
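The delta here is just guest minus host-observed time: the guest clock 1729909769.216503060 is 2024-10-26 02:29:29.216503060 UTC, and 29.216503060 - 29.124116784 = 0.092386276 s, i.e. the 92.386276ms reported as within the tolerance checked at fix.go:200.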
	I1026 02:29:29.238990   79140 start.go:83] releasing machines lock for "bridge-761631", held for 25.700950423s
	I1026 02:29:29.239006   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:29.239266   79140 main.go:141] libmachine: (bridge-761631) Calling .GetIP
	I1026 02:29:29.242242   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.242613   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:29.242640   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.242845   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:29.243277   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:29.243435   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:29.243545   79140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 02:29:29.243589   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:29.243691   79140 ssh_runner.go:195] Run: cat /version.json
	I1026 02:29:29.243715   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:29.246033   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.246357   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.246378   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:29.246409   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.246550   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:29.246696   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:29.246822   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:29.246849   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.246866   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:29.247016   79140 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa Username:docker}
	I1026 02:29:29.247046   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:29.247207   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:29.247368   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:29.247488   79140 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa Username:docker}
	I1026 02:29:29.356403   79140 ssh_runner.go:195] Run: systemctl --version
	I1026 02:29:29.362585   79140 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 02:29:29.519312   79140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 02:29:29.524546   79140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 02:29:29.524614   79140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 02:29:29.540036   79140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 02:29:29.540062   79140 start.go:495] detecting cgroup driver to use...
	I1026 02:29:29.540119   79140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 02:29:29.555431   79140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 02:29:29.568499   79140 docker.go:217] disabling cri-docker service (if available) ...
	I1026 02:29:29.568557   79140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 02:29:29.581888   79140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 02:29:29.594423   79140 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 02:29:29.708968   79140 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 02:29:29.887103   79140 docker.go:233] disabling docker service ...
	I1026 02:29:29.887184   79140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 02:29:29.902996   79140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 02:29:29.917323   79140 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 02:29:30.055076   79140 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 02:29:30.176154   79140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 02:29:30.191040   79140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 02:29:30.214132   79140 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 02:29:30.214183   79140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:30.225094   79140 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 02:29:30.225158   79140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:30.235756   79140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:30.245621   79140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:30.256117   79140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 02:29:30.269296   79140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:30.280377   79140 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:30.300325   79140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:30.310664   79140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 02:29:30.321276   79140 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 02:29:30.321337   79140 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 02:29:30.335449   79140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
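The sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge only appears once the br_netfilter module is loaded, which is why the log immediately falls back to modprobe and then enables IP forwarding. A minimal sketch of that probe-then-load pattern (same command names as in the log, simplified error handling, needs root; not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBrNetfilter checks whether the bridge netfilter sysctl is visible and,
// if not, loads the br_netfilter kernel module and re-checks.
func ensureBrNetfilter() error {
	check := func() error {
		return exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run()
	}
	if err := check(); err == nil {
		return nil // proc entry already present
	}
	// The sysctl is missing until the module is loaded.
	if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %w", err)
	}
	return check()
}

func main() {
	if err := ensureBrNetfilter(); err != nil {
		fmt.Println("bridge netfilter still unavailable:", err)
		return
	}
	fmt.Println("net.bridge.bridge-nf-call-iptables is available")
}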
	I1026 02:29:30.344654   79140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:29:30.477091   79140 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 02:29:30.560664   79140 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 02:29:30.560746   79140 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 02:29:30.565026   79140 start.go:563] Will wait 60s for crictl version
	I1026 02:29:30.565078   79140 ssh_runner.go:195] Run: which crictl
	I1026 02:29:30.568710   79140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 02:29:30.611177   79140 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 02:29:30.611273   79140 ssh_runner.go:195] Run: crio --version
	I1026 02:29:30.641197   79140 ssh_runner.go:195] Run: crio --version
	I1026 02:29:30.675790   79140 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 02:29:28.571391   77486 node_ready.go:53] node "flannel-761631" has status "Ready":"False"
	I1026 02:29:31.069756   77486 node_ready.go:53] node "flannel-761631" has status "Ready":"False"
	I1026 02:29:30.676943   79140 main.go:141] libmachine: (bridge-761631) Calling .GetIP
	I1026 02:29:30.679671   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:30.680030   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:30.680093   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:30.680233   79140 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1026 02:29:30.684283   79140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
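The /etc/hosts one-liner above is written to be idempotent: it strips any existing host.minikube.internal entry before appending the current mapping. A minimal in-memory sketch of the same rewrite; the upsertHostsEntry helper is illustrative, only the file contents and entry come from the log.

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any line ending in "\t"+name and appends ip+"\t"+name,
// mirroring the grep -v / echo pipeline in the log above.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.168.50.1\thost.minikube.internal\n"
	after := upsertHostsEntry(before, "192.168.50.1", "host.minikube.internal")
	fmt.Print(after)
}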
	I1026 02:29:30.696184   79140 kubeadm.go:883] updating cluster {Name:bridge-761631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:bridge-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 02:29:30.696294   79140 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:29:30.696345   79140 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:29:30.730209   79140 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1026 02:29:30.730280   79140 ssh_runner.go:195] Run: which lz4
	I1026 02:29:30.733919   79140 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 02:29:30.737681   79140 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 02:29:30.737710   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1026 02:29:33.070906   77486 node_ready.go:49] node "flannel-761631" has status "Ready":"True"
	I1026 02:29:33.070944   77486 node_ready.go:38] duration metric: took 8.504544013s for node "flannel-761631" to be "Ready" ...
	I1026 02:29:33.070957   77486 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:29:33.080658   77486 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-46w28" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:35.090267   77486 pod_ready.go:103] pod "coredns-7c65d6cfc9-46w28" in "kube-system" namespace has status "Ready":"False"
	I1026 02:29:32.017369   79140 crio.go:462] duration metric: took 1.28349159s to copy over tarball
	I1026 02:29:32.017500   79140 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 02:29:34.236021   79140 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.218492472s)
	I1026 02:29:34.236046   79140 crio.go:469] duration metric: took 2.218648878s to extract the tarball
	I1026 02:29:34.236053   79140 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 02:29:34.271451   79140 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:29:34.312396   79140 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 02:29:34.312423   79140 cache_images.go:84] Images are preloaded, skipping loading
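The two crictl image listings above bracket the preload step: the first finds no kube-apiserver image, so the cached tarball is copied over and extracted; the second confirms all images are present. A minimal sketch of that check, assuming crictl's usual JSON output shape (an images array with repoTags); hasImage is an illustrative helper, not minikube's code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether crictl lists the given tag among the runtime's images.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.2")
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	if ok {
		fmt.Println("images are preloaded, skipping tarball extraction")
	} else {
		fmt.Println("missing kube-apiserver image; the preload tarball is needed")
	}
}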
	I1026 02:29:34.312433   79140 kubeadm.go:934] updating node { 192.168.50.234 8443 v1.31.2 crio true true} ...
	I1026 02:29:34.312539   79140 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-761631 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:bridge-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1026 02:29:34.312621   79140 ssh_runner.go:195] Run: crio config
	I1026 02:29:34.361402   79140 cni.go:84] Creating CNI manager for "bridge"
	I1026 02:29:34.361449   79140 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 02:29:34.361476   79140 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.234 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-761631 NodeName:bridge-761631 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 02:29:34.361620   79140 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-761631"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.234"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.234"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
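The generated kubeadm config above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by ---. A minimal sketch that walks the documents and prints each kind, assuming gopkg.in/yaml.v3 is available; it is only a reader for the stream, not part of minikube.

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents in the stream
			}
			fmt.Println("decode:", err)
			return
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}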
	
	I1026 02:29:34.361691   79140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 02:29:34.374250   79140 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 02:29:34.374322   79140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 02:29:34.384039   79140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1026 02:29:34.403170   79140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 02:29:34.421890   79140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1026 02:29:34.438089   79140 ssh_runner.go:195] Run: grep 192.168.50.234	control-plane.minikube.internal$ /etc/hosts
	I1026 02:29:34.442189   79140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.234	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:29:34.454244   79140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:29:34.578036   79140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:29:34.597007   79140 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631 for IP: 192.168.50.234
	I1026 02:29:34.597035   79140 certs.go:194] generating shared ca certs ...
	I1026 02:29:34.597055   79140 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:34.597240   79140 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 02:29:34.597297   79140 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 02:29:34.597310   79140 certs.go:256] generating profile certs ...
	I1026 02:29:34.597381   79140 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.key
	I1026 02:29:34.597400   79140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt with IP's: []
	I1026 02:29:34.741373   79140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt ...
	I1026 02:29:34.741401   79140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: {Name:mkc4cd4d1bccd5089183954b26279211f5d756cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:34.741586   79140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.key ...
	I1026 02:29:34.741598   79140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.key: {Name:mkdc10d78a03b28651203ac3496bcd643469f528 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:34.741677   79140 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.key.01d3e0dd
	I1026 02:29:34.741692   79140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.crt.01d3e0dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.234]
	I1026 02:29:34.855620   79140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.crt.01d3e0dd ...
	I1026 02:29:34.855649   79140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.crt.01d3e0dd: {Name:mk925642f87d27331d95b4da2e25b3e311a30842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:34.855799   79140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.key.01d3e0dd ...
	I1026 02:29:34.855811   79140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.key.01d3e0dd: {Name:mk9740863d6253ed528a1333e9f3510e5305462d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:34.855877   79140 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.crt.01d3e0dd -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.crt
	I1026 02:29:34.855952   79140 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.key.01d3e0dd -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.key
	I1026 02:29:34.856002   79140 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/proxy-client.key
	I1026 02:29:34.856015   79140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/proxy-client.crt with IP's: []
	I1026 02:29:35.015622   79140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/proxy-client.crt ...
	I1026 02:29:35.015648   79140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/proxy-client.crt: {Name:mkbf3e835a69ee7e48d04f654560e899bd3b3674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:35.015795   79140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/proxy-client.key ...
	I1026 02:29:35.015805   79140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/proxy-client.key: {Name:mke780832f50437d0a211749f10c83e726275217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
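For readers following the certs steps above: the apiserver certificate is issued with the IP SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.50.234). The sketch below issues a serving certificate with the same SANs using only the Go standard library; the throwaway CA and all names are illustrative and errors are dropped for brevity, so this is not how minikube's certs package is implemented.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA, for illustration only (error values ignored for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the same IP SANs the log reports for the apiserver cert.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.234"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}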
	I1026 02:29:35.015975   79140 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 02:29:35.016012   79140 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 02:29:35.016018   79140 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 02:29:35.016039   79140 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 02:29:35.016064   79140 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 02:29:35.016085   79140 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 02:29:35.016142   79140 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:29:35.016685   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 02:29:35.040122   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 02:29:35.062557   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 02:29:35.087485   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 02:29:35.110454   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 02:29:35.133145   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 02:29:35.155074   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 02:29:35.177163   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 02:29:35.198652   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 02:29:35.219923   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 02:29:35.241241   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 02:29:35.273004   79140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 02:29:35.296045   79140 ssh_runner.go:195] Run: openssl version
	I1026 02:29:35.301917   79140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 02:29:35.312175   79140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 02:29:35.316671   79140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 02:29:35.316731   79140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 02:29:35.322719   79140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 02:29:35.333477   79140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 02:29:35.344537   79140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 02:29:35.349048   79140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 02:29:35.349098   79140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 02:29:35.354681   79140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 02:29:35.364998   79140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 02:29:35.375616   79140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:29:35.380100   79140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:29:35.380158   79140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:29:35.385818   79140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 02:29:35.396002   79140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 02:29:35.400041   79140 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 02:29:35.400091   79140 kubeadm.go:392] StartCluster: {Name:bridge-761631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 C
lusterName:bridge-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:29:35.400166   79140 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 02:29:35.400207   79140 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:29:35.435104   79140 cri.go:89] found id: ""
	I1026 02:29:35.435176   79140 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 02:29:35.444318   79140 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 02:29:35.453219   79140 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:29:35.462317   79140 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:29:35.462333   79140 kubeadm.go:157] found existing configuration files:
	
	I1026 02:29:35.462369   79140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 02:29:35.471145   79140 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:29:35.471253   79140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:29:35.480515   79140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 02:29:35.489384   79140 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:29:35.489458   79140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:29:35.498443   79140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 02:29:35.507149   79140 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:29:35.507205   79140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:29:35.515975   79140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 02:29:35.524194   79140 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:29:35.524249   79140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 02:29:35.533114   79140 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 02:29:35.703210   79140 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 02:29:37.588135   77486 pod_ready.go:103] pod "coredns-7c65d6cfc9-46w28" in "kube-system" namespace has status "Ready":"False"
	I1026 02:29:40.087422   77486 pod_ready.go:103] pod "coredns-7c65d6cfc9-46w28" in "kube-system" namespace has status "Ready":"False"
	I1026 02:29:45.919150   79140 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1026 02:29:45.919229   79140 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 02:29:45.919336   79140 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 02:29:45.919438   79140 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 02:29:45.919542   79140 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 02:29:45.919636   79140 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 02:29:45.921123   79140 out.go:235]   - Generating certificates and keys ...
	I1026 02:29:45.921211   79140 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 02:29:45.921284   79140 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 02:29:45.921375   79140 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 02:29:45.921476   79140 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1026 02:29:45.921569   79140 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1026 02:29:45.921685   79140 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1026 02:29:45.921779   79140 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1026 02:29:45.921937   79140 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-761631 localhost] and IPs [192.168.50.234 127.0.0.1 ::1]
	I1026 02:29:45.922005   79140 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1026 02:29:45.922173   79140 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-761631 localhost] and IPs [192.168.50.234 127.0.0.1 ::1]
	I1026 02:29:45.922256   79140 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 02:29:45.922338   79140 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 02:29:45.922403   79140 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1026 02:29:45.922447   79140 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 02:29:45.922493   79140 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 02:29:45.922571   79140 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 02:29:45.922645   79140 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 02:29:45.922740   79140 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 02:29:45.922818   79140 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 02:29:45.922953   79140 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 02:29:45.923031   79140 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 02:29:45.924258   79140 out.go:235]   - Booting up control plane ...
	I1026 02:29:45.924346   79140 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 02:29:45.924440   79140 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 02:29:45.924527   79140 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 02:29:45.924670   79140 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 02:29:45.924805   79140 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 02:29:45.924864   79140 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 02:29:45.925051   79140 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 02:29:45.925186   79140 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 02:29:45.925265   79140 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.99081ms
	I1026 02:29:45.925353   79140 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1026 02:29:45.925450   79140 kubeadm.go:310] [api-check] The API server is healthy after 5.001677707s
	I1026 02:29:45.925594   79140 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 02:29:45.925767   79140 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 02:29:45.925845   79140 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 02:29:45.926066   79140 kubeadm.go:310] [mark-control-plane] Marking the node bridge-761631 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 02:29:45.926130   79140 kubeadm.go:310] [bootstrap-token] Using token: 3a94l2.wnr5sqdsr9c515xe
	I1026 02:29:45.928085   79140 out.go:235]   - Configuring RBAC rules ...
	I1026 02:29:45.928193   79140 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 02:29:45.928296   79140 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 02:29:45.928439   79140 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 02:29:45.928567   79140 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 02:29:45.928690   79140 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 02:29:45.928799   79140 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 02:29:45.928970   79140 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 02:29:45.929015   79140 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1026 02:29:45.929055   79140 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1026 02:29:45.929063   79140 kubeadm.go:310] 
	I1026 02:29:45.929112   79140 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1026 02:29:45.929119   79140 kubeadm.go:310] 
	I1026 02:29:45.929199   79140 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1026 02:29:45.929207   79140 kubeadm.go:310] 
	I1026 02:29:45.929228   79140 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1026 02:29:45.929308   79140 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 02:29:45.929381   79140 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 02:29:45.929393   79140 kubeadm.go:310] 
	I1026 02:29:45.929488   79140 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1026 02:29:45.929497   79140 kubeadm.go:310] 
	I1026 02:29:45.929566   79140 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 02:29:45.929576   79140 kubeadm.go:310] 
	I1026 02:29:45.929652   79140 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1026 02:29:45.929772   79140 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 02:29:45.929863   79140 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 02:29:45.929872   79140 kubeadm.go:310] 
	I1026 02:29:45.929973   79140 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 02:29:45.930083   79140 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1026 02:29:45.930095   79140 kubeadm.go:310] 
	I1026 02:29:45.930191   79140 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3a94l2.wnr5sqdsr9c515xe \
	I1026 02:29:45.930310   79140 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d \
	I1026 02:29:45.930339   79140 kubeadm.go:310] 	--control-plane 
	I1026 02:29:45.930349   79140 kubeadm.go:310] 
	I1026 02:29:45.930435   79140 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1026 02:29:45.930451   79140 kubeadm.go:310] 
	I1026 02:29:45.930524   79140 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3a94l2.wnr5sqdsr9c515xe \
	I1026 02:29:45.930634   79140 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d 
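The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA certificate's Subject Public Key Info. A minimal sketch of recomputing it from ca.crt, using only the standard library; the path is the one the certificates were copied to earlier in this run.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// CA location on the guest, per the scp step earlier in the log.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println("read ca.crt:", err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse certificate:", err)
		return
	}
	// kubeadm's discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}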
	I1026 02:29:45.930649   79140 cni.go:84] Creating CNI manager for "bridge"
	I1026 02:29:45.931993   79140 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 02:29:42.088382   77486 pod_ready.go:103] pod "coredns-7c65d6cfc9-46w28" in "kube-system" namespace has status "Ready":"False"
	I1026 02:29:44.089678   77486 pod_ready.go:103] pod "coredns-7c65d6cfc9-46w28" in "kube-system" namespace has status "Ready":"False"
	I1026 02:29:45.586605   77486 pod_ready.go:93] pod "coredns-7c65d6cfc9-46w28" in "kube-system" namespace has status "Ready":"True"
	I1026 02:29:45.586629   77486 pod_ready.go:82] duration metric: took 12.505940428s for pod "coredns-7c65d6cfc9-46w28" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.586639   77486 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.592528   77486 pod_ready.go:93] pod "etcd-flannel-761631" in "kube-system" namespace has status "Ready":"True"
	I1026 02:29:45.592545   77486 pod_ready.go:82] duration metric: took 5.900244ms for pod "etcd-flannel-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.592554   77486 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.597525   77486 pod_ready.go:93] pod "kube-apiserver-flannel-761631" in "kube-system" namespace has status "Ready":"True"
	I1026 02:29:45.597541   77486 pod_ready.go:82] duration metric: took 4.982933ms for pod "kube-apiserver-flannel-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.597550   77486 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.602472   77486 pod_ready.go:93] pod "kube-controller-manager-flannel-761631" in "kube-system" namespace has status "Ready":"True"
	I1026 02:29:45.602487   77486 pod_ready.go:82] duration metric: took 4.931952ms for pod "kube-controller-manager-flannel-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.602496   77486 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-5gn8b" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.607418   77486 pod_ready.go:93] pod "kube-proxy-5gn8b" in "kube-system" namespace has status "Ready":"True"
	I1026 02:29:45.607436   77486 pod_ready.go:82] duration metric: took 4.933679ms for pod "kube-proxy-5gn8b" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.607445   77486 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.984741   77486 pod_ready.go:93] pod "kube-scheduler-flannel-761631" in "kube-system" namespace has status "Ready":"True"
	I1026 02:29:45.984765   77486 pod_ready.go:82] duration metric: took 377.314061ms for pod "kube-scheduler-flannel-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.984776   77486 pod_ready.go:39] duration metric: took 12.913779647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:29:45.984789   77486 api_server.go:52] waiting for apiserver process to appear ...
	I1026 02:29:45.984836   77486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:29:45.999759   77486 api_server.go:72] duration metric: took 21.658882332s to wait for apiserver process to appear ...
	I1026 02:29:45.999790   77486 api_server.go:88] waiting for apiserver healthz status ...
	I1026 02:29:45.999813   77486 api_server.go:253] Checking apiserver healthz at https://192.168.61.248:8443/healthz ...
	I1026 02:29:46.005506   77486 api_server.go:279] https://192.168.61.248:8443/healthz returned 200:
	ok
	I1026 02:29:46.006785   77486 api_server.go:141] control plane version: v1.31.2
	I1026 02:29:46.006821   77486 api_server.go:131] duration metric: took 7.023624ms to wait for apiserver health ...
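The healthz probe above returns 200/ok before the control-plane version is read. A minimal sketch of that readiness poll, skipping TLS verification because the apiserver serves a cluster-local CA; the endpoint is taken from the log and the timeouts are illustrative.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is signed by the cluster-local CA, so skip
			// verification for this illustrative probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.61.248:8443/healthz" // endpoint from the log
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}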
	I1026 02:29:46.006830   77486 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 02:29:46.188748   77486 system_pods.go:59] 7 kube-system pods found
	I1026 02:29:46.188783   77486 system_pods.go:61] "coredns-7c65d6cfc9-46w28" [95524e8b-ebae-4f82-bb93-4c0877c206d7] Running
	I1026 02:29:46.188790   77486 system_pods.go:61] "etcd-flannel-761631" [021f5e7d-f838-41e2-8760-fa7d43b47f97] Running
	I1026 02:29:46.188796   77486 system_pods.go:61] "kube-apiserver-flannel-761631" [370db70a-478d-475b-89f5-f8f78bd856e6] Running
	I1026 02:29:46.188802   77486 system_pods.go:61] "kube-controller-manager-flannel-761631" [23d06d27-2e1f-423b-9314-6193d5812f94] Running
	I1026 02:29:46.188806   77486 system_pods.go:61] "kube-proxy-5gn8b" [9a895cde-6d7b-42aa-ad9e-49943865b4fe] Running
	I1026 02:29:46.188811   77486 system_pods.go:61] "kube-scheduler-flannel-761631" [47391923-c6fb-4b72-b107-6ccf6a1be461] Running
	I1026 02:29:46.188818   77486 system_pods.go:61] "storage-provisioner" [4f546ad1-6af3-40e6-bbb6-4a23e6424ff3] Running
	I1026 02:29:46.188825   77486 system_pods.go:74] duration metric: took 181.988223ms to wait for pod list to return data ...
	I1026 02:29:46.188833   77486 default_sa.go:34] waiting for default service account to be created ...
	I1026 02:29:46.384239   77486 default_sa.go:45] found service account: "default"
	I1026 02:29:46.384263   77486 default_sa.go:55] duration metric: took 195.42289ms for default service account to be created ...
	I1026 02:29:46.384272   77486 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 02:29:46.587198   77486 system_pods.go:86] 7 kube-system pods found
	I1026 02:29:46.587226   77486 system_pods.go:89] "coredns-7c65d6cfc9-46w28" [95524e8b-ebae-4f82-bb93-4c0877c206d7] Running
	I1026 02:29:46.587236   77486 system_pods.go:89] "etcd-flannel-761631" [021f5e7d-f838-41e2-8760-fa7d43b47f97] Running
	I1026 02:29:46.587242   77486 system_pods.go:89] "kube-apiserver-flannel-761631" [370db70a-478d-475b-89f5-f8f78bd856e6] Running
	I1026 02:29:46.587248   77486 system_pods.go:89] "kube-controller-manager-flannel-761631" [23d06d27-2e1f-423b-9314-6193d5812f94] Running
	I1026 02:29:46.587254   77486 system_pods.go:89] "kube-proxy-5gn8b" [9a895cde-6d7b-42aa-ad9e-49943865b4fe] Running
	I1026 02:29:46.587260   77486 system_pods.go:89] "kube-scheduler-flannel-761631" [47391923-c6fb-4b72-b107-6ccf6a1be461] Running
	I1026 02:29:46.587268   77486 system_pods.go:89] "storage-provisioner" [4f546ad1-6af3-40e6-bbb6-4a23e6424ff3] Running
	I1026 02:29:46.587276   77486 system_pods.go:126] duration metric: took 202.998368ms to wait for k8s-apps to be running ...
	I1026 02:29:46.587291   77486 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 02:29:46.587335   77486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 02:29:46.601033   77486 system_svc.go:56] duration metric: took 13.736973ms WaitForService to wait for kubelet
	I1026 02:29:46.601084   77486 kubeadm.go:582] duration metric: took 22.260202048s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:29:46.601101   77486 node_conditions.go:102] verifying NodePressure condition ...
	I1026 02:29:46.784852   77486 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 02:29:46.784878   77486 node_conditions.go:123] node cpu capacity is 2
	I1026 02:29:46.784891   77486 node_conditions.go:105] duration metric: took 183.785972ms to run NodePressure ...
	I1026 02:29:46.784901   77486 start.go:241] waiting for startup goroutines ...
	I1026 02:29:46.784907   77486 start.go:246] waiting for cluster config update ...
	I1026 02:29:46.784916   77486 start.go:255] writing updated cluster config ...
	I1026 02:29:46.785195   77486 ssh_runner.go:195] Run: rm -f paused
	I1026 02:29:46.829900   77486 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1026 02:29:46.831605   77486 out.go:177] * Done! kubectl is now configured to use "flannel-761631" cluster and "default" namespace by default
	W1026 02:29:46.840457   77486 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 697471db-7ca4-44ca-9cc4-0edbe17bfeea
	I1026 02:29:45.933282   79140 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 02:29:45.946292   79140 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1026 02:29:45.963252   79140 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 02:29:45.963311   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:45.963351   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-761631 minikube.k8s.io/updated_at=2024_10_26T02_29_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=bridge-761631 minikube.k8s.io/primary=true
	I1026 02:29:46.084640   79140 ops.go:34] apiserver oom_adj: -16
	I1026 02:29:46.084755   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:46.585533   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:47.085231   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:47.585132   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:48.085107   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:48.584952   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:49.085062   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:49.585704   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:50.085441   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:50.179423   79140 kubeadm.go:1113] duration metric: took 4.216166839s to wait for elevateKubeSystemPrivileges
	I1026 02:29:50.179462   79140 kubeadm.go:394] duration metric: took 14.779373824s to StartCluster
	I1026 02:29:50.179485   79140 settings.go:142] acquiring lock: {Name:mkb363a7a1b1532a7f832b54a0283d0a9e3d2b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:50.179566   79140 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:29:50.180656   79140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:50.180888   79140 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 02:29:50.180923   79140 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 02:29:50.180903   79140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 02:29:50.181035   79140 addons.go:69] Setting default-storageclass=true in profile "bridge-761631"
	I1026 02:29:50.181060   79140 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-761631"
	I1026 02:29:50.181069   79140 config.go:182] Loaded profile config "bridge-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:29:50.181025   79140 addons.go:69] Setting storage-provisioner=true in profile "bridge-761631"
	I1026 02:29:50.181144   79140 addons.go:234] Setting addon storage-provisioner=true in "bridge-761631"
	I1026 02:29:50.181189   79140 host.go:66] Checking if "bridge-761631" exists ...
	I1026 02:29:50.181648   79140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:50.181657   79140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:50.181701   79140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:50.181734   79140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:50.182595   79140 out.go:177] * Verifying Kubernetes components...
	I1026 02:29:50.183753   79140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:29:50.196965   79140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36609
	I1026 02:29:50.196966   79140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36409
	I1026 02:29:50.197446   79140 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:50.197502   79140 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:50.198034   79140 main.go:141] libmachine: Using API Version  1
	I1026 02:29:50.198055   79140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:50.198178   79140 main.go:141] libmachine: Using API Version  1
	I1026 02:29:50.198205   79140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:50.198414   79140 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:50.198572   79140 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:50.198605   79140 main.go:141] libmachine: (bridge-761631) Calling .GetState
	I1026 02:29:50.199148   79140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:50.199196   79140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:50.202486   79140 addons.go:234] Setting addon default-storageclass=true in "bridge-761631"
	I1026 02:29:50.202532   79140 host.go:66] Checking if "bridge-761631" exists ...
	I1026 02:29:50.202943   79140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:50.202990   79140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:50.215612   79140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37943
	I1026 02:29:50.216209   79140 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:50.216739   79140 main.go:141] libmachine: Using API Version  1
	I1026 02:29:50.216770   79140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:50.217132   79140 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:50.217347   79140 main.go:141] libmachine: (bridge-761631) Calling .GetState
	I1026 02:29:50.218601   79140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33157
	I1026 02:29:50.219227   79140 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:50.219301   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:50.219773   79140 main.go:141] libmachine: Using API Version  1
	I1026 02:29:50.219797   79140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:50.220067   79140 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:50.220504   79140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:50.220543   79140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:50.221017   79140 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:29:50.222310   79140 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:29:50.222328   79140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 02:29:50.222342   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:50.225627   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:50.226100   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:50.226129   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:50.226423   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:50.226614   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:50.226735   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:50.226868   79140 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa Username:docker}
	I1026 02:29:50.237479   79140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
	I1026 02:29:50.237972   79140 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:50.238390   79140 main.go:141] libmachine: Using API Version  1
	I1026 02:29:50.238412   79140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:50.238864   79140 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:50.239000   79140 main.go:141] libmachine: (bridge-761631) Calling .GetState
	I1026 02:29:50.240392   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:50.240643   79140 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 02:29:50.240659   79140 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 02:29:50.240672   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:50.243239   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:50.243509   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:50.243528   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:50.243778   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:50.243954   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:50.244078   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:50.244191   79140 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa Username:docker}
	I1026 02:29:50.418926   79140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 02:29:50.437628   79140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:29:50.556343   79140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 02:29:50.663762   79140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:29:50.858117   79140 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1026 02:29:50.859228   79140 node_ready.go:35] waiting up to 15m0s for node "bridge-761631" to be "Ready" ...
	I1026 02:29:50.875628   79140 node_ready.go:49] node "bridge-761631" has status "Ready":"True"
	I1026 02:29:50.875655   79140 node_ready.go:38] duration metric: took 16.40424ms for node "bridge-761631" to be "Ready" ...
	I1026 02:29:50.875668   79140 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:29:50.893111   79140 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:50.989070   79140 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:50.989125   79140 main.go:141] libmachine: (bridge-761631) Calling .Close
	I1026 02:29:50.989395   79140 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:50.989428   79140 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:50.989438   79140 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:50.989442   79140 main.go:141] libmachine: (bridge-761631) DBG | Closing plugin on server side
	I1026 02:29:50.989448   79140 main.go:141] libmachine: (bridge-761631) Calling .Close
	I1026 02:29:50.989715   79140 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:50.989732   79140 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:51.004388   79140 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:51.004411   79140 main.go:141] libmachine: (bridge-761631) Calling .Close
	I1026 02:29:51.004728   79140 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:51.004823   79140 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:51.004799   79140 main.go:141] libmachine: (bridge-761631) DBG | Closing plugin on server side
	I1026 02:29:51.380622   79140 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-761631" context rescaled to 1 replicas
	I1026 02:29:51.493156   79140 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:51.493186   79140 main.go:141] libmachine: (bridge-761631) Calling .Close
	I1026 02:29:51.493458   79140 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:51.493472   79140 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:51.493481   79140 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:51.493488   79140 main.go:141] libmachine: (bridge-761631) Calling .Close
	I1026 02:29:51.493752   79140 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:51.493769   79140 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:51.495504   79140 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1026 02:29:51.496896   79140 addons.go:510] duration metric: took 1.315972716s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1026 02:29:52.899525   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace has status "Ready":"False"
	I1026 02:29:54.899623   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace has status "Ready":"False"
	I1026 02:29:56.900017   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace has status "Ready":"False"
	I1026 02:29:59.398542   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:01.399647   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:01.899831   79140 pod_ready.go:98] pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:30:01 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:29:50 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:29:50 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:29:50 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:29:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.234 HostIPs:[{IP:192.168.50
.234}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-26 02:29:50 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-26 02:29:51 +0000 UTC,FinishedAt:2024-10-26 02:30:01 +0000 UTC,ContainerID:cri-o://4af0ca4c814fadb8bc70871a1e5abe280966f290d195249545dbdba00a03d01d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://4af0ca4c814fadb8bc70871a1e5abe280966f290d195249545dbdba00a03d01d Started:0xc00203af90 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000882800} {Name:kube-api-access-f8bsr MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000882810}] User:ni
l AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1026 02:30:01.899871   79140 pod_ready.go:82] duration metric: took 11.00673045s for pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace to be "Ready" ...
	E1026 02:30:01.899886   79140 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:30:01 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:29:50 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:29:50 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:29:50 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:29:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.5
0.234 HostIPs:[{IP:192.168.50.234}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-26 02:29:50 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-26 02:29:51 +0000 UTC,FinishedAt:2024-10-26 02:30:01 +0000 UTC,ContainerID:cri-o://4af0ca4c814fadb8bc70871a1e5abe280966f290d195249545dbdba00a03d01d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://4af0ca4c814fadb8bc70871a1e5abe280966f290d195249545dbdba00a03d01d Started:0xc00203af90 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000882800} {Name:kube-api-access-f8bsr MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveRe
adOnly:0xc000882810}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
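	(Note: the original coredns-7c65d6cfc9-k9kvl pod ends in phase Succeeded because the coredns deployment was rescaled to 1 replica a few lines above, so the readiness wait moves on to the surviving replica. To reproduce the same view by hand, assuming the kubeconfig written by this run:
	    kubectl --context bridge-761631 -n kube-system get pods -l k8s-app=kube-dns -o wide
	)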
	I1026 02:30:01.899902   79140 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:03.906256   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:05.929917   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:08.406456   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:10.907420   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:13.405880   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:15.406701   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:17.906220   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:19.906580   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:22.406051   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:24.406236   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:26.407004   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:28.905840   79140 pod_ready.go:93] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"True"
	I1026 02:30:28.905866   79140 pod_ready.go:82] duration metric: took 27.00595527s for pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.905877   79140 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.909975   79140 pod_ready.go:93] pod "etcd-bridge-761631" in "kube-system" namespace has status "Ready":"True"
	I1026 02:30:28.909998   79140 pod_ready.go:82] duration metric: took 4.113104ms for pod "etcd-bridge-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.910007   79140 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.913593   79140 pod_ready.go:93] pod "kube-apiserver-bridge-761631" in "kube-system" namespace has status "Ready":"True"
	I1026 02:30:28.913613   79140 pod_ready.go:82] duration metric: took 3.599819ms for pod "kube-apiserver-bridge-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.913621   79140 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.918213   79140 pod_ready.go:93] pod "kube-controller-manager-bridge-761631" in "kube-system" namespace has status "Ready":"True"
	I1026 02:30:28.918232   79140 pod_ready.go:82] duration metric: took 4.60513ms for pod "kube-controller-manager-bridge-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.918240   79140 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-b657k" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.924642   79140 pod_ready.go:93] pod "kube-proxy-b657k" in "kube-system" namespace has status "Ready":"True"
	I1026 02:30:28.924662   79140 pod_ready.go:82] duration metric: took 6.416092ms for pod "kube-proxy-b657k" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.924670   79140 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:29.305235   79140 pod_ready.go:93] pod "kube-scheduler-bridge-761631" in "kube-system" namespace has status "Ready":"True"
	I1026 02:30:29.305259   79140 pod_ready.go:82] duration metric: took 380.583389ms for pod "kube-scheduler-bridge-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:29.305267   79140 pod_ready.go:39] duration metric: took 38.429587744s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:30:29.305282   79140 api_server.go:52] waiting for apiserver process to appear ...
	I1026 02:30:29.305347   79140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:30:29.321031   79140 api_server.go:72] duration metric: took 39.140108344s to wait for apiserver process to appear ...
	I1026 02:30:29.321059   79140 api_server.go:88] waiting for apiserver healthz status ...
	I1026 02:30:29.321078   79140 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I1026 02:30:29.325233   79140 api_server.go:279] https://192.168.50.234:8443/healthz returned 200:
	ok
	I1026 02:30:29.326297   79140 api_server.go:141] control plane version: v1.31.2
	I1026 02:30:29.326322   79140 api_server.go:131] duration metric: took 5.254713ms to wait for apiserver health ...
	I1026 02:30:29.326330   79140 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 02:30:29.506763   79140 system_pods.go:59] 7 kube-system pods found
	I1026 02:30:29.506790   79140 system_pods.go:61] "coredns-7c65d6cfc9-nggsr" [56b01394-480f-495b-922a-ed2b483f294e] Running
	I1026 02:30:29.506795   79140 system_pods.go:61] "etcd-bridge-761631" [67fe00a3-64c4-4206-91eb-821af3fef7da] Running
	I1026 02:30:29.506798   79140 system_pods.go:61] "kube-apiserver-bridge-761631" [b2d08738-29e9-410e-aa6a-373816a7d585] Running
	I1026 02:30:29.506802   79140 system_pods.go:61] "kube-controller-manager-bridge-761631" [8f000fcc-5dca-4b07-87fd-7dbf09ed82c4] Running
	I1026 02:30:29.506805   79140 system_pods.go:61] "kube-proxy-b657k" [9afd730f-3a54-454b-9188-f1f24192cf54] Running
	I1026 02:30:29.506808   79140 system_pods.go:61] "kube-scheduler-bridge-761631" [1cac5675-b5aa-4239-b6c6-1d3b5d9e69cf] Running
	I1026 02:30:29.506810   79140 system_pods.go:61] "storage-provisioner" [c600327b-8a81-46eb-9730-37f8e45fe0be] Running
	I1026 02:30:29.506816   79140 system_pods.go:74] duration metric: took 180.479854ms to wait for pod list to return data ...
	I1026 02:30:29.506821   79140 default_sa.go:34] waiting for default service account to be created ...
	I1026 02:30:29.704417   79140 default_sa.go:45] found service account: "default"
	I1026 02:30:29.704444   79140 default_sa.go:55] duration metric: took 197.616958ms for default service account to be created ...
	I1026 02:30:29.704453   79140 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 02:30:29.906080   79140 system_pods.go:86] 7 kube-system pods found
	I1026 02:30:29.906119   79140 system_pods.go:89] "coredns-7c65d6cfc9-nggsr" [56b01394-480f-495b-922a-ed2b483f294e] Running
	I1026 02:30:29.906128   79140 system_pods.go:89] "etcd-bridge-761631" [67fe00a3-64c4-4206-91eb-821af3fef7da] Running
	I1026 02:30:29.906134   79140 system_pods.go:89] "kube-apiserver-bridge-761631" [b2d08738-29e9-410e-aa6a-373816a7d585] Running
	I1026 02:30:29.906139   79140 system_pods.go:89] "kube-controller-manager-bridge-761631" [8f000fcc-5dca-4b07-87fd-7dbf09ed82c4] Running
	I1026 02:30:29.906145   79140 system_pods.go:89] "kube-proxy-b657k" [9afd730f-3a54-454b-9188-f1f24192cf54] Running
	I1026 02:30:29.906148   79140 system_pods.go:89] "kube-scheduler-bridge-761631" [1cac5675-b5aa-4239-b6c6-1d3b5d9e69cf] Running
	I1026 02:30:29.906152   79140 system_pods.go:89] "storage-provisioner" [c600327b-8a81-46eb-9730-37f8e45fe0be] Running
	I1026 02:30:29.906158   79140 system_pods.go:126] duration metric: took 201.700394ms to wait for k8s-apps to be running ...
	I1026 02:30:29.906164   79140 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 02:30:29.906210   79140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 02:30:29.920749   79140 system_svc.go:56] duration metric: took 14.573227ms WaitForService to wait for kubelet
	I1026 02:30:29.920779   79140 kubeadm.go:582] duration metric: took 39.739859653s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:30:29.920802   79140 node_conditions.go:102] verifying NodePressure condition ...
	I1026 02:30:30.104198   79140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 02:30:30.104222   79140 node_conditions.go:123] node cpu capacity is 2
	I1026 02:30:30.104232   79140 node_conditions.go:105] duration metric: took 183.42671ms to run NodePressure ...
	I1026 02:30:30.104243   79140 start.go:241] waiting for startup goroutines ...
	I1026 02:30:30.104250   79140 start.go:246] waiting for cluster config update ...
	I1026 02:30:30.104260   79140 start.go:255] writing updated cluster config ...
	I1026 02:30:30.104497   79140 ssh_runner.go:195] Run: rm -f paused
	I1026 02:30:30.151937   79140 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1026 02:30:30.153939   79140 out.go:177] * Done! kubectl is now configured to use "bridge-761631" cluster and "default" namespace by default
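	(Note: both profile starts complete successfully above. A minimal sanity check against the freshly written kubeconfig, assuming it is the one at /home/jenkins/minikube-integration/19868-8680/kubeconfig referenced earlier in this log, would be:
	    kubectl --context bridge-761631 get nodes -o wide
	    kubectl --context bridge-761631 -n kube-system get pods
	)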
	
	
	==> CRI-O <==
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.497779411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910047497746949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f69d786e-00e6-451c-b211-33c133dd0b01 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.498282119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1b9baed-44d9-4409-bc9d-ced19f0a3f2a name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.498371098Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1b9baed-44d9-4409-bc9d-ced19f0a3f2a name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.498672893Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723,PodSandboxId:ff2b794780fc51d1df85c4c7d8481d3636eb5aeaacef6049417f58342aa9445a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729909272342312554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c86915-4d74-4774-b8cd-86bf37672a55,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ce0e41970bb56da898602f64b6eb9f11644a3f9d8cd20bf59ca7748de2be71,PodSandboxId:ca8867f88fa0b7395a3b666f1e65e5b00af426893aed65e0726a6339c7d4ff65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729909252273224103,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c9b0d313-34c5-4a3b-9172-ea1015817010,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416,PodSandboxId:1c67ad179fc6ac8ec880e769ad49b5604bc648df638b1eda2f5614dcf4d8883a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729909249140596129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xpxp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ea4ee4-aab2-4c92-ab2f-e1026c703ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a,PodSandboxId:a1028bd8f05ef54287c48df04b96fa14767b47848c03179218f331255297faa9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729909241501376375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c947q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e41c6a1e-1
a8e-4c49-93ff-e0c60a87ea69,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d,PodSandboxId:ff2b794780fc51d1df85c4c7d8481d3636eb5aeaacef6049417f58342aa9445a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729909241485916288,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c86915-4d74-4774-b8cd
-86bf37672a55,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55,PodSandboxId:29430ce1be5a44f71f48314591f66659f730e318fddc1961b4e87b465907e46c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729909237921167837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f84d74b5e63a81aeb0f93
07c8959d094,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e,PodSandboxId:500d0afc9dfd3892496e02ee9eb36a4751548566039582e8bf0c778d13578194,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729909237908389459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbae12a8278ff238e662a15
d0686d074,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8,PodSandboxId:29ed2f42a7fd5b86ff1e9622fdede7a14efd10faa8e34903edd8ea0dc48f8e19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729909237895427824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: ef9976e774bcaa0181689afdda68dcb0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72,PodSandboxId:5532133f711cf97c4fb57586ed1f2a1187bb2092a3f702f06765813a88d4768e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729909237925398510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e1bb8364b888bb16a22a8938242f
16,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1b9baed-44d9-4409-bc9d-ced19f0a3f2a name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.538418773Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=014378cd-3885-4133-9181-ec2e9ea712ec name=/runtime.v1.RuntimeService/Version
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.538531064Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=014378cd-3885-4133-9181-ec2e9ea712ec name=/runtime.v1.RuntimeService/Version
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.540011880Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b683693f-3d12-4e36-ad05-a73861255b35 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.540650838Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910047540616321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b683693f-3d12-4e36-ad05-a73861255b35 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.541373919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57c5c426-31d8-4f7f-b1ca-394a1291a51e name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.541426518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57c5c426-31d8-4f7f-b1ca-394a1291a51e name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.541700757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723,PodSandboxId:ff2b794780fc51d1df85c4c7d8481d3636eb5aeaacef6049417f58342aa9445a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729909272342312554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c86915-4d74-4774-b8cd-86bf37672a55,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ce0e41970bb56da898602f64b6eb9f11644a3f9d8cd20bf59ca7748de2be71,PodSandboxId:ca8867f88fa0b7395a3b666f1e65e5b00af426893aed65e0726a6339c7d4ff65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729909252273224103,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c9b0d313-34c5-4a3b-9172-ea1015817010,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416,PodSandboxId:1c67ad179fc6ac8ec880e769ad49b5604bc648df638b1eda2f5614dcf4d8883a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729909249140596129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xpxp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ea4ee4-aab2-4c92-ab2f-e1026c703ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a,PodSandboxId:a1028bd8f05ef54287c48df04b96fa14767b47848c03179218f331255297faa9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729909241501376375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c947q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e41c6a1e-1
a8e-4c49-93ff-e0c60a87ea69,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d,PodSandboxId:ff2b794780fc51d1df85c4c7d8481d3636eb5aeaacef6049417f58342aa9445a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729909241485916288,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c86915-4d74-4774-b8cd
-86bf37672a55,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55,PodSandboxId:29430ce1be5a44f71f48314591f66659f730e318fddc1961b4e87b465907e46c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729909237921167837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f84d74b5e63a81aeb0f93
07c8959d094,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e,PodSandboxId:500d0afc9dfd3892496e02ee9eb36a4751548566039582e8bf0c778d13578194,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729909237908389459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbae12a8278ff238e662a15
d0686d074,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8,PodSandboxId:29ed2f42a7fd5b86ff1e9622fdede7a14efd10faa8e34903edd8ea0dc48f8e19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729909237895427824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: ef9976e774bcaa0181689afdda68dcb0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72,PodSandboxId:5532133f711cf97c4fb57586ed1f2a1187bb2092a3f702f06765813a88d4768e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729909237925398510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e1bb8364b888bb16a22a8938242f
16,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57c5c426-31d8-4f7f-b1ca-394a1291a51e name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.581064851Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=637e79d7-1d0e-42dc-8182-c25739f7c744 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.581161930Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=637e79d7-1d0e-42dc-8182-c25739f7c744 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.582540102Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=30e5a169-44bc-45d6-924e-892ab7721f97 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.583041968Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910047583016578,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30e5a169-44bc-45d6-924e-892ab7721f97 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.583853196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2687f3b-bf3a-4621-ae1f-8876546c5b52 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.583924800Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2687f3b-bf3a-4621-ae1f-8876546c5b52 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.584175094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723,PodSandboxId:ff2b794780fc51d1df85c4c7d8481d3636eb5aeaacef6049417f58342aa9445a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729909272342312554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c86915-4d74-4774-b8cd-86bf37672a55,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ce0e41970bb56da898602f64b6eb9f11644a3f9d8cd20bf59ca7748de2be71,PodSandboxId:ca8867f88fa0b7395a3b666f1e65e5b00af426893aed65e0726a6339c7d4ff65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729909252273224103,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c9b0d313-34c5-4a3b-9172-ea1015817010,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416,PodSandboxId:1c67ad179fc6ac8ec880e769ad49b5604bc648df638b1eda2f5614dcf4d8883a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729909249140596129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xpxp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ea4ee4-aab2-4c92-ab2f-e1026c703ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a,PodSandboxId:a1028bd8f05ef54287c48df04b96fa14767b47848c03179218f331255297faa9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729909241501376375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c947q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e41c6a1e-1a8e-4c49-93ff-e0c60a87ea69,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d,PodSandboxId:ff2b794780fc51d1df85c4c7d8481d3636eb5aeaacef6049417f58342aa9445a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729909241485916288,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c86915-4d74-4774-b8cd-86bf37672a55,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55,PodSandboxId:29430ce1be5a44f71f48314591f66659f730e318fddc1961b4e87b465907e46c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729909237921167837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f84d74b5e63a81aeb0f9307c8959d094,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e,PodSandboxId:500d0afc9dfd3892496e02ee9eb36a4751548566039582e8bf0c778d13578194,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729909237908389459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbae12a8278ff238e662a15d0686d074,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8,PodSandboxId:29ed2f42a7fd5b86ff1e9622fdede7a14efd10faa8e34903edd8ea0dc48f8e19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729909237895427824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef9976e774bcaa0181689afdda68dcb0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72,PodSandboxId:5532133f711cf97c4fb57586ed1f2a1187bb2092a3f702f06765813a88d4768e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729909237925398510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e1bb8364b888bb16a22a8938242f16,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2687f3b-bf3a-4621-ae1f-8876546c5b52 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.619043617Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8deb2502-b30e-46c7-9256-f3797196b0ec name=/runtime.v1.RuntimeService/Version
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.619140947Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8deb2502-b30e-46c7-9256-f3797196b0ec name=/runtime.v1.RuntimeService/Version
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.620667952Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5863e677-58df-44c5-83ef-63f60d63e4c4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.621424349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910047621393106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5863e677-58df-44c5-83ef-63f60d63e4c4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.622067789Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4676579e-7662-4045-be29-022396b64627 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.622142263Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4676579e-7662-4045-be29-022396b64627 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:34:07 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:34:07.622628032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723,PodSandboxId:ff2b794780fc51d1df85c4c7d8481d3636eb5aeaacef6049417f58342aa9445a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729909272342312554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c86915-4d74-4774-b8cd-86bf37672a55,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ce0e41970bb56da898602f64b6eb9f11644a3f9d8cd20bf59ca7748de2be71,PodSandboxId:ca8867f88fa0b7395a3b666f1e65e5b00af426893aed65e0726a6339c7d4ff65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729909252273224103,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c9b0d313-34c5-4a3b-9172-ea1015817010,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416,PodSandboxId:1c67ad179fc6ac8ec880e769ad49b5604bc648df638b1eda2f5614dcf4d8883a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729909249140596129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xpxp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ea4ee4-aab2-4c92-ab2f-e1026c703ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a,PodSandboxId:a1028bd8f05ef54287c48df04b96fa14767b47848c03179218f331255297faa9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729909241501376375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c947q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e41c6a1e-1a8e-4c49-93ff-e0c60a87ea69,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d,PodSandboxId:ff2b794780fc51d1df85c4c7d8481d3636eb5aeaacef6049417f58342aa9445a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729909241485916288,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c86915-4d74-4774-b8cd-86bf37672a55,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55,PodSandboxId:29430ce1be5a44f71f48314591f66659f730e318fddc1961b4e87b465907e46c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729909237921167837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f84d74b5e63a81aeb0f9307c8959d094,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e,PodSandboxId:500d0afc9dfd3892496e02ee9eb36a4751548566039582e8bf0c778d13578194,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729909237908389459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbae12a8278ff238e662a15d0686d074,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8,PodSandboxId:29ed2f42a7fd5b86ff1e9622fdede7a14efd10faa8e34903edd8ea0dc48f8e19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729909237895427824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef9976e774bcaa0181689afdda68dcb0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72,PodSandboxId:5532133f711cf97c4fb57586ed1f2a1187bb2092a3f702f06765813a88d4768e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729909237925398510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e1bb8364b888bb16a22a8938242f16,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4676579e-7662-4045-be29-022396b64627 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5f5715a92670a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   ff2b794780fc5       storage-provisioner
	f7ce0e41970bb       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   ca8867f88fa0b       busybox
	e298a85093930       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   1c67ad179fc6a       coredns-7c65d6cfc9-xpxp4
	da7e523b4bbb0       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   a1028bd8f05ef       kube-proxy-c947q
	17b28d6cdb6a1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   ff2b794780fc5       storage-provisioner
	b57cb0310518d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   5532133f711cf       etcd-default-k8s-diff-port-661357
	c185a46f0bdfd       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   29430ce1be5a4       kube-scheduler-default-k8s-diff-port-661357
	c7c70f177d310       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   500d0afc9dfd3       kube-apiserver-default-k8s-diff-port-661357
	a4307158d97a1       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   29ed2f42a7fd5       kube-controller-manager-default-k8s-diff-port-661357
	
	
	==> coredns [e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51918 - 41826 "HINFO IN 4582937509147534390.1757325468208855726. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025059784s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-661357
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-661357
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=default-k8s-diff-port-661357
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_26T02_12_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 02:11:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-661357
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 02:33:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 02:31:22 +0000   Sat, 26 Oct 2024 02:11:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 02:31:22 +0000   Sat, 26 Oct 2024 02:11:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 02:31:22 +0000   Sat, 26 Oct 2024 02:11:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 02:31:22 +0000   Sat, 26 Oct 2024 02:20:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.18
	  Hostname:    default-k8s-diff-port-661357
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c3995c3d63394bf89d65eca9d2425260
	  System UUID:                c3995c3d-6339-4bf8-9d65-eca9d2425260
	  Boot ID:                    6939014d-c7b4-47cf-adfa-355e3ba8660d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7c65d6cfc9-xpxp4                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-default-k8s-diff-port-661357                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-default-k8s-diff-port-661357             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-661357    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-c947q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-default-k8s-diff-port-661357             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-jkl5g                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m                kubelet          Node default-k8s-diff-port-661357 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node default-k8s-diff-port-661357 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node default-k8s-diff-port-661357 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node default-k8s-diff-port-661357 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-661357 event: Registered Node default-k8s-diff-port-661357 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-661357 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-661357 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-661357 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-661357 event: Registered Node default-k8s-diff-port-661357 in Controller
	
	
	==> dmesg <==
	[Oct26 02:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051355] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037341] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.849005] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.876726] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.568076] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.624957] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.062482] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062809] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.202814] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.117981] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.272701] systemd-fstab-generator[699]: Ignoring "noauto" option for root device
	[  +4.082704] systemd-fstab-generator[789]: Ignoring "noauto" option for root device
	[  +1.812008] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +0.059730] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.498520] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.484663] systemd-fstab-generator[1539]: Ignoring "noauto" option for root device
	[  +3.239561] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.144037] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72] <==
	{"level":"info","ts":"2024-10-26T02:27:17.702682Z","caller":"traceutil/trace.go:171","msg":"trace[959353859] linearizableReadLoop","detail":"{readStateIndex:1052; appliedIndex:1051; }","duration":"273.535625ms","start":"2024-10-26T02:27:17.429126Z","end":"2024-10-26T02:27:17.702662Z","steps":["trace[959353859] 'read index received'  (duration: 49.133916ms)","trace[959353859] 'applied index is now lower than readState.Index'  (duration: 224.400386ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T02:27:17.702819Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.672158ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:27:17.702857Z","caller":"traceutil/trace.go:171","msg":"trace[207453697] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:932; }","duration":"273.730003ms","start":"2024-10-26T02:27:17.429121Z","end":"2024-10-26T02:27:17.702851Z","steps":["trace[207453697] 'agreement among raft nodes before linearized reading'  (duration: 273.656633ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:27:18.400177Z","caller":"traceutil/trace.go:171","msg":"trace[1127558973] transaction","detail":"{read_only:false; response_revision:933; number_of_response:1; }","duration":"166.58298ms","start":"2024-10-26T02:27:18.233575Z","end":"2024-10-26T02:27:18.400158Z","steps":["trace[1127558973] 'process raft request'  (duration: 166.426529ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:27:52.395720Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.319893ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16281799934639470665 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.18\" mod_revision:953 > success:<request_put:<key:\"/registry/masterleases/192.168.72.18\" value_size:66 lease:7058427897784694855 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.18\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-26T02:27:52.396312Z","caller":"traceutil/trace.go:171","msg":"trace[122952709] transaction","detail":"{read_only:false; response_revision:961; number_of_response:1; }","duration":"234.245299ms","start":"2024-10-26T02:27:52.162049Z","end":"2024-10-26T02:27:52.396294Z","steps":["trace[122952709] 'process raft request'  (duration: 123.048429ms)","trace[122952709] 'compare'  (duration: 110.145368ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-26T02:27:52.912564Z","caller":"traceutil/trace.go:171","msg":"trace[1350654715] transaction","detail":"{read_only:false; response_revision:962; number_of_response:1; }","duration":"209.334507ms","start":"2024-10-26T02:27:52.703209Z","end":"2024-10-26T02:27:52.912544Z","steps":["trace[1350654715] 'process raft request'  (duration: 209.115568ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:28:30.661702Z","caller":"traceutil/trace.go:171","msg":"trace[713476800] transaction","detail":"{read_only:false; response_revision:990; number_of_response:1; }","duration":"105.697391ms","start":"2024-10-26T02:28:30.555982Z","end":"2024-10-26T02:28:30.661679Z","steps":["trace[713476800] 'process raft request'  (duration: 105.487292ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:29:11.809929Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"381.52955ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:29:11.810257Z","caller":"traceutil/trace.go:171","msg":"trace[2092483729] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1022; }","duration":"381.923158ms","start":"2024-10-26T02:29:11.428317Z","end":"2024-10-26T02:29:11.810240Z","steps":["trace[2092483729] 'range keys from in-memory index tree'  (duration: 381.509619ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:29:11.810281Z","caller":"traceutil/trace.go:171","msg":"trace[692266272] transaction","detail":"{read_only:false; response_revision:1023; number_of_response:1; }","duration":"383.666502ms","start":"2024-10-26T02:29:11.426603Z","end":"2024-10-26T02:29:11.810269Z","steps":["trace[692266272] 'process raft request'  (duration: 360.516118ms)","trace[692266272] 'compare'  (duration: 22.510822ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T02:29:11.810600Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T02:29:11.426583Z","time spent":"383.83336ms","remote":"127.0.0.1:49422","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-qzh5p77s5bgvam2krmy2un4zhe\" mod_revision:1014 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-qzh5p77s5bgvam2krmy2un4zhe\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-qzh5p77s5bgvam2krmy2un4zhe\" > >"}
	{"level":"warn","ts":"2024-10-26T02:29:11.810136Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.472459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:29:11.811081Z","caller":"traceutil/trace.go:171","msg":"trace[1079584733] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1023; }","duration":"190.442669ms","start":"2024-10-26T02:29:11.620627Z","end":"2024-10-26T02:29:11.811069Z","steps":["trace[1079584733] 'agreement among raft nodes before linearized reading'  (duration: 189.40885ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:29:11.809972Z","caller":"traceutil/trace.go:171","msg":"trace[371637022] linearizableReadLoop","detail":"{readStateIndex:1165; appliedIndex:1164; }","duration":"189.314741ms","start":"2024-10-26T02:29:11.620631Z","end":"2024-10-26T02:29:11.809945Z","steps":["trace[371637022] 'read index received'  (duration: 166.433072ms)","trace[371637022] 'applied index is now lower than readState.Index'  (duration: 22.881095ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T02:29:35.740830Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.74092ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:29:35.741093Z","caller":"traceutil/trace.go:171","msg":"trace[1747380013] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1043; }","duration":"121.031075ms","start":"2024-10-26T02:29:35.620044Z","end":"2024-10-26T02:29:35.741075Z","steps":["trace[1747380013] 'range keys from in-memory index tree'  (duration: 120.677879ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:29:37.619443Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.95841ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16281799934639471294 > lease_revoke:<id:61f492c6a029fa64>","response":"size:27"}
	{"level":"info","ts":"2024-10-26T02:29:37.619616Z","caller":"traceutil/trace.go:171","msg":"trace[1385581497] linearizableReadLoop","detail":"{readStateIndex:1191; appliedIndex:1190; }","duration":"138.346081ms","start":"2024-10-26T02:29:37.481260Z","end":"2024-10-26T02:29:37.619606Z","steps":["trace[1385581497] 'read index received'  (duration: 28.167966ms)","trace[1385581497] 'applied index is now lower than readState.Index'  (duration: 110.177046ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T02:29:37.619874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.613679ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-10-26T02:29:37.619952Z","caller":"traceutil/trace.go:171","msg":"trace[1437726017] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1043; }","duration":"138.700645ms","start":"2024-10-26T02:29:37.481239Z","end":"2024-10-26T02:29:37.619939Z","steps":["trace[1437726017] 'agreement among raft nodes before linearized reading'  (duration: 138.519987ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:30:34.001866Z","caller":"traceutil/trace.go:171","msg":"trace[1268697056] transaction","detail":"{read_only:false; response_revision:1089; number_of_response:1; }","duration":"118.261598ms","start":"2024-10-26T02:30:33.883578Z","end":"2024-10-26T02:30:34.001839Z","steps":["trace[1268697056] 'process raft request'  (duration: 118.173107ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:30:39.602955Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":852}
	{"level":"info","ts":"2024-10-26T02:30:39.612539Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":852,"took":"9.233046ms","hash":529089037,"current-db-size-bytes":2633728,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2633728,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-10-26T02:30:39.612631Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":529089037,"revision":852,"compact-revision":-1}
	
	
	==> kernel <==
	 02:34:07 up 13 min,  0 users,  load average: 0.00, 0.06, 0.07
	Linux default-k8s-diff-port-661357 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e] <==
	W1026 02:30:41.832754       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:30:41.832931       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 02:30:41.833746       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:30:41.834852       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 02:31:41.834236       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:31:41.834355       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1026 02:31:41.835343       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:31:41.835443       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 02:31:41.835559       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:31:41.836605       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 02:33:41.836812       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:33:41.836979       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1026 02:33:41.837017       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:33:41.837032       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 02:33:41.838153       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:33:41.838193       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8] <==
	E1026 02:28:44.454276       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:28:44.933961       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:29:14.462376       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:29:14.946168       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:29:44.468862       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:29:44.955328       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:30:14.475161       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:30:14.963406       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:30:44.481818       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:30:44.972368       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:31:14.488092       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:31:14.979896       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1026 02:31:22.964896       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-661357"
	E1026 02:31:44.494397       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:31:44.988204       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1026 02:32:03.150471       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="195.279µs"
	E1026 02:32:14.499287       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:32:14.995274       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1026 02:32:15.156459       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="99.276µs"
	E1026 02:32:44.505301       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:32:45.002098       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:33:14.511487       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:33:15.008705       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:33:44.517649       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:33:45.015914       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1026 02:20:41.671267       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1026 02:20:41.682120       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.18"]
	E1026 02:20:41.682253       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 02:20:41.728180       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1026 02:20:41.728711       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 02:20:41.728797       1 server_linux.go:169] "Using iptables Proxier"
	I1026 02:20:41.734738       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 02:20:41.735974       1 server.go:483] "Version info" version="v1.31.2"
	I1026 02:20:41.736050       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 02:20:41.742718       1 config.go:199] "Starting service config controller"
	I1026 02:20:41.742771       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1026 02:20:41.742792       1 config.go:105] "Starting endpoint slice config controller"
	I1026 02:20:41.742796       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1026 02:20:41.743220       1 config.go:328] "Starting node config controller"
	I1026 02:20:41.743280       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1026 02:20:41.844589       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1026 02:20:41.844712       1 shared_informer.go:320] Caches are synced for service config
	I1026 02:20:41.845700       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55] <==
	I1026 02:20:39.230264       1 serving.go:386] Generated self-signed cert in-memory
	W1026 02:20:40.763086       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 02:20:40.763123       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 02:20:40.763139       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 02:20:40.763149       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 02:20:40.827012       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1026 02:20:40.827115       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 02:20:40.830012       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1026 02:20:40.830674       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 02:20:40.830756       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 02:20:40.830776       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 02:20:40.931071       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 26 02:32:57 default-k8s-diff-port-661357 kubelet[916]: E1026 02:32:57.136440     916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jkl5g" podUID="023bd779-83b7-42ef-893d-f7ab70f08ae7"
	Oct 26 02:33:06 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:06.312644     916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909986312120996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:33:06 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:06.313653     916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909986312120996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:33:10 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:10.137127     916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jkl5g" podUID="023bd779-83b7-42ef-893d-f7ab70f08ae7"
	Oct 26 02:33:16 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:16.315541     916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909996315137923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:33:16 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:16.315824     916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729909996315137923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:33:22 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:22.137296     916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jkl5g" podUID="023bd779-83b7-42ef-893d-f7ab70f08ae7"
	Oct 26 02:33:26 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:26.320305     916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910006317154236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:33:26 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:26.321075     916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910006317154236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:33:33 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:33.136061     916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jkl5g" podUID="023bd779-83b7-42ef-893d-f7ab70f08ae7"
	Oct 26 02:33:36 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:36.151171     916 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 26 02:33:36 default-k8s-diff-port-661357 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 26 02:33:36 default-k8s-diff-port-661357 kubelet[916]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 26 02:33:36 default-k8s-diff-port-661357 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 02:33:36 default-k8s-diff-port-661357 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 02:33:36 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:36.323117     916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910016322847389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:33:36 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:36.323140     916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910016322847389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:33:44 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:44.137352     916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jkl5g" podUID="023bd779-83b7-42ef-893d-f7ab70f08ae7"
	Oct 26 02:33:46 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:46.324554     916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910026324306713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:33:46 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:46.324578     916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910026324306713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:33:56 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:56.326326     916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910036326114400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:33:56 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:56.326379     916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910036326114400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:33:57 default-k8s-diff-port-661357 kubelet[916]: E1026 02:33:57.135680     916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jkl5g" podUID="023bd779-83b7-42ef-893d-f7ab70f08ae7"
	Oct 26 02:34:06 default-k8s-diff-port-661357 kubelet[916]: E1026 02:34:06.327968     916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910046327611812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:34:06 default-k8s-diff-port-661357 kubelet[916]: E1026 02:34:06.328292     916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910046327611812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d] <==
	I1026 02:20:41.582048       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 02:21:11.585304       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723] <==
	I1026 02:21:12.431353       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 02:21:12.442729       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 02:21:12.442795       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 02:21:29.845768       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 02:21:29.846184       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-661357_d8be50d4-5354-4142-959b-3fee8c75f754!
	I1026 02:21:29.849588       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"09cda3dd-67fa-4ae7-ae56-1289dd15961d", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-661357_d8be50d4-5354-4142-959b-3fee8c75f754 became leader
	I1026 02:21:29.947337       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-661357_d8be50d4-5354-4142-959b-3fee8c75f754!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-661357 -n default-k8s-diff-port-661357
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-661357 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-jkl5g
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-661357 describe pod metrics-server-6867b74b74-jkl5g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-661357 describe pod metrics-server-6867b74b74-jkl5g: exit status 1 (58.617318ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-jkl5g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-661357 describe pod metrics-server-6867b74b74-jkl5g: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.09s)
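For reference, the post-mortem check above (kubectl get po -A --field-selector=status.phase!=Running, followed by describe) can be reproduced outside the test harness. The sketch below is illustrative only: it uses client-go to run the same non-running-pod query; the context name default-k8s-diff-port-661357 is taken from the log above, and everything else (kubeconfig location, package layout) is an assumption about the test host.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client for the profile's kube context; the context name is copied
	// from the report above and is an assumption about the local kubeconfig.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "default-k8s-diff-port-661357"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same field selector the helper uses: pods in any namespace that are not Running.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

Run against this profile while it was live, it would be expected to print kube-system/metrics-server-6867b74b74-jkl5g, matching the helper output above; the later "not found" from describe suggests the pod was deleted between the two post-mortem commands.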

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (436.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1026 02:34:10.444552   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:13.052271   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:22.972866   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:31.455815   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:33.533580   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:46.848811   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:46.855190   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:46.866599   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:46.887986   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:46.929391   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:47.010855   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:47.172396   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:47.494289   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:48.136213   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:49.417874   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:51.406104   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:51.980180   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:57.101516   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:05.461303   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:07.343838   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:14.495477   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:27.825201   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:30.598038   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:30.604428   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:30.615796   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:30.637217   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:30.678632   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:30.760926   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:30.922726   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:31.244762   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:31.887043   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:33.169207   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:35.730887   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:40.853081   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:44.895010   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:35:51.094349   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:36:08.787481   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:36:11.575787   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:36:13.328129   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:36:20.361139   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:36:36.417747   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:36:37.284626   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:36:47.595266   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:36:52.537259   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:36:55.375017   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:37:15.297705   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:37:21.599229   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:37:30.709306   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:37:49.303089   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:38:01.033136   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:38:10.882192   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:38:14.458673   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:38:28.736588   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:38:29.467715   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:38:52.558289   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:38:52.960918   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:38:57.170237   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:39:20.260064   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:39:46.848939   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:40:14.550731   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:40:30.598807   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:40:58.300830   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-661357 -n default-k8s-diff-port-661357
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-26 02:41:23.529459691 +0000 UTC m=+7098.297222868
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-661357 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-661357 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.346µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-661357 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
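The near-instant failure of the describe step above (context deadline exceeded after 2.346µs) is consistent with the follow-up command being run under the same test context whose 9m0s budget was already spent waiting for the dashboard pod, so it gives up before making any API call. A minimal sketch of that pattern; the 50ms timeout and the sleep are purely illustrative stand-ins for the real wait:

package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	// One deadline covers the whole test step, like the 9m0s wait above.
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	// The pod wait consumes the entire budget...
	time.Sleep(100 * time.Millisecond)

	// ...so any later step that checks the same context fails immediately,
	// in microseconds, without attempting another round-trip.
	start := time.Now()
	err := ctx.Err() // context.DeadlineExceeded
	fmt.Printf("after %s: %v\n", time.Since(start), err)
}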
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-661357 -n default-k8s-diff-port-661357
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-661357 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-661357 logs -n 25: (1.076261861s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-761631 sudo iptables                       | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo cat                            | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo cat                            | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo cat                            | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo docker                         | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo cat                            | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo cat                            | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo cat                            | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo cat                            | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo                                | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo find                           | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-761631 sudo crio                           | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-761631                                     | bridge-761631 | jenkins | v1.34.0 | 26 Oct 24 02:30 UTC | 26 Oct 24 02:30 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 02:28:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 02:28:56.856159   79140 out.go:345] Setting OutFile to fd 1 ...
	I1026 02:28:56.856276   79140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:28:56.856286   79140 out.go:358] Setting ErrFile to fd 2...
	I1026 02:28:56.856291   79140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 02:28:56.856467   79140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 02:28:56.857047   79140 out.go:352] Setting JSON to false
	I1026 02:28:56.858155   79140 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7877,"bootTime":1729901860,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 02:28:56.858244   79140 start.go:139] virtualization: kvm guest
	I1026 02:28:56.860342   79140 out.go:177] * [bridge-761631] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 02:28:56.861753   79140 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 02:28:56.861769   79140 notify.go:220] Checking for updates...
	I1026 02:28:56.864120   79140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 02:28:56.865457   79140 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:28:56.866728   79140 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:28:56.867918   79140 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 02:28:56.869121   79140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 02:28:56.870974   79140 config.go:182] Loaded profile config "default-k8s-diff-port-661357": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:28:56.871113   79140 config.go:182] Loaded profile config "enable-default-cni-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:28:56.871248   79140 config.go:182] Loaded profile config "flannel-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:28:56.871360   79140 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 02:28:56.907046   79140 out.go:177] * Using the kvm2 driver based on user configuration
	I1026 02:28:56.908208   79140 start.go:297] selected driver: kvm2
	I1026 02:28:56.908219   79140 start.go:901] validating driver "kvm2" against <nil>
	I1026 02:28:56.908230   79140 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 02:28:56.908882   79140 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:28:56.908979   79140 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 02:28:56.924645   79140 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 02:28:56.924692   79140 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 02:28:56.924969   79140 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:28:56.924998   79140 cni.go:84] Creating CNI manager for "bridge"
	I1026 02:28:56.925003   79140 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 02:28:56.925054   79140 start.go:340] cluster config:
	{Name:bridge-761631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:28:56.925193   79140 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 02:28:56.926707   79140 out.go:177] * Starting "bridge-761631" primary control-plane node in "bridge-761631" cluster
	I1026 02:28:59.052672   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:28:59.053208   77486 main.go:141] libmachine: (flannel-761631) Found IP for machine: 192.168.61.248
	I1026 02:28:59.053231   77486 main.go:141] libmachine: (flannel-761631) Reserving static IP address...
	I1026 02:28:59.053241   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has current primary IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:28:59.053610   77486 main.go:141] libmachine: (flannel-761631) DBG | unable to find host DHCP lease matching {name: "flannel-761631", mac: "52:54:00:e1:ad:74", ip: "192.168.61.248"} in network mk-flannel-761631
	I1026 02:28:59.135986   77486 main.go:141] libmachine: (flannel-761631) DBG | Getting to WaitForSSH function...
	I1026 02:28:59.136019   77486 main.go:141] libmachine: (flannel-761631) Reserved static IP address: 192.168.61.248
	I1026 02:28:59.136034   77486 main.go:141] libmachine: (flannel-761631) Waiting for SSH to be available...
	I1026 02:28:59.138641   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:28:59.138894   77486 main.go:141] libmachine: (flannel-761631) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631
	I1026 02:28:59.138920   77486 main.go:141] libmachine: (flannel-761631) DBG | unable to find defined IP address of network mk-flannel-761631 interface with MAC address 52:54:00:e1:ad:74
	I1026 02:28:59.139100   77486 main.go:141] libmachine: (flannel-761631) DBG | Using SSH client type: external
	I1026 02:28:59.139127   77486 main.go:141] libmachine: (flannel-761631) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa (-rw-------)
	I1026 02:28:59.139154   77486 main.go:141] libmachine: (flannel-761631) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 02:28:59.139167   77486 main.go:141] libmachine: (flannel-761631) DBG | About to run SSH command:
	I1026 02:28:59.139179   77486 main.go:141] libmachine: (flannel-761631) DBG | exit 0
	I1026 02:28:59.143017   77486 main.go:141] libmachine: (flannel-761631) DBG | SSH cmd err, output: exit status 255: 
	I1026 02:28:59.143034   77486 main.go:141] libmachine: (flannel-761631) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1026 02:28:59.143041   77486 main.go:141] libmachine: (flannel-761631) DBG | command : exit 0
	I1026 02:28:59.143045   77486 main.go:141] libmachine: (flannel-761631) DBG | err     : exit status 255
	I1026 02:28:59.143052   77486 main.go:141] libmachine: (flannel-761631) DBG | output  : 
	I1026 02:28:56.927977   79140 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:28:56.928021   79140 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 02:28:56.928034   79140 cache.go:56] Caching tarball of preloaded images
	I1026 02:28:56.928130   79140 preload.go:172] Found /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 02:28:56.928144   79140 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1026 02:28:56.928270   79140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/config.json ...
	I1026 02:28:56.928300   79140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/config.json: {Name:mk0ea3c89d6ff01c0e3a98a985d381e9c11db97e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:28:56.928473   79140 start.go:360] acquireMachinesLock for bridge-761631: {Name:mkc6876e276b7a81668ce8efec2d491cf3d18bce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 02:29:03.538004   79140 start.go:364] duration metric: took 6.609465722s to acquireMachinesLock for "bridge-761631"
	I1026 02:29:03.538075   79140 start.go:93] Provisioning new machine with config: &{Name:bridge-761631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:bridge-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 02:29:03.538201   79140 start.go:125] createHost starting for "" (driver="kvm2")
	I1026 02:29:02.143351   77486 main.go:141] libmachine: (flannel-761631) DBG | Getting to WaitForSSH function...
	I1026 02:29:02.145717   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.146065   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:02.146093   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.146230   77486 main.go:141] libmachine: (flannel-761631) DBG | Using SSH client type: external
	I1026 02:29:02.146251   77486 main.go:141] libmachine: (flannel-761631) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa (-rw-------)
	I1026 02:29:02.146279   77486 main.go:141] libmachine: (flannel-761631) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 02:29:02.146293   77486 main.go:141] libmachine: (flannel-761631) DBG | About to run SSH command:
	I1026 02:29:02.146311   77486 main.go:141] libmachine: (flannel-761631) DBG | exit 0
	I1026 02:29:02.273577   77486 main.go:141] libmachine: (flannel-761631) DBG | SSH cmd err, output: <nil>: 
	I1026 02:29:02.273798   77486 main.go:141] libmachine: (flannel-761631) KVM machine creation complete!
	I1026 02:29:02.274194   77486 main.go:141] libmachine: (flannel-761631) Calling .GetConfigRaw
	I1026 02:29:02.274821   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:02.274998   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:02.275168   77486 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 02:29:02.275185   77486 main.go:141] libmachine: (flannel-761631) Calling .GetState
	I1026 02:29:02.276503   77486 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 02:29:02.276515   77486 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 02:29:02.276520   77486 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 02:29:02.276525   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:02.278979   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.279313   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:02.279349   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.279448   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:02.279592   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.279736   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.279847   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:02.280010   77486 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:02.280224   77486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.248 22 <nil> <nil>}
	I1026 02:29:02.280236   77486 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 02:29:02.384544   77486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:29:02.384569   77486 main.go:141] libmachine: Detecting the provisioner...
	I1026 02:29:02.384579   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:02.387347   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.387757   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:02.387784   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.387993   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:02.388185   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.388319   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.388442   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:02.388649   77486 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:02.388862   77486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.248 22 <nil> <nil>}
	I1026 02:29:02.388877   77486 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 02:29:02.493993   77486 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 02:29:02.494104   77486 main.go:141] libmachine: found compatible host: buildroot
	I1026 02:29:02.494119   77486 main.go:141] libmachine: Provisioning with buildroot...
	I1026 02:29:02.494132   77486 main.go:141] libmachine: (flannel-761631) Calling .GetMachineName
	I1026 02:29:02.494363   77486 buildroot.go:166] provisioning hostname "flannel-761631"
	I1026 02:29:02.494387   77486 main.go:141] libmachine: (flannel-761631) Calling .GetMachineName
	I1026 02:29:02.494578   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:02.496840   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.497245   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:02.497280   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.497392   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:02.497573   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.497695   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.497840   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:02.498023   77486 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:02.498238   77486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.248 22 <nil> <nil>}
	I1026 02:29:02.498255   77486 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-761631 && echo "flannel-761631" | sudo tee /etc/hostname
	I1026 02:29:02.612400   77486 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-761631
	
	I1026 02:29:02.612426   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:02.615521   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.615929   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:02.615962   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.616178   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:02.616337   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.616487   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.616591   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:02.616741   77486 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:02.616965   77486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.248 22 <nil> <nil>}
	I1026 02:29:02.616992   77486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-761631' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-761631/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-761631' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 02:29:02.734935   77486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:29:02.734970   77486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 02:29:02.734993   77486 buildroot.go:174] setting up certificates
	I1026 02:29:02.735002   77486 provision.go:84] configureAuth start
	I1026 02:29:02.735013   77486 main.go:141] libmachine: (flannel-761631) Calling .GetMachineName
	I1026 02:29:02.735299   77486 main.go:141] libmachine: (flannel-761631) Calling .GetIP
	I1026 02:29:02.738345   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.738760   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:02.738787   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.739045   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:02.741283   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.741629   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:02.741657   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.741780   77486 provision.go:143] copyHostCerts
	I1026 02:29:02.741839   77486 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 02:29:02.741859   77486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 02:29:02.741964   77486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 02:29:02.742085   77486 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 02:29:02.742093   77486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 02:29:02.742126   77486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 02:29:02.742226   77486 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 02:29:02.742234   77486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 02:29:02.742257   77486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 02:29:02.742320   77486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.flannel-761631 san=[127.0.0.1 192.168.61.248 flannel-761631 localhost minikube]
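The server certificate generated above is signed by the local minikube CA and carries SANs for the loopback address, the VM address 192.168.61.248, and the names flannel-761631, localhost and minikube. Below is a minimal Go sketch of issuing such a certificate; it is an illustration rather than minikube's own code, and the file paths, PKCS#1 RSA key format and one-year validity are assumptions (error handling is elided for brevity).

    // Illustrative only: issue a server cert signed by an existing CA with the SANs
    // seen in the log. Paths, the PKCS#1 RSA key format and the validity are assumptions.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caPEM, _ := os.ReadFile("ca.pem")        // CA certificate (placeholder path)
        caKeyPEM, _ := os.ReadFile("ca-key.pem") // CA private key (placeholder path)
        caBlock, _ := pem.Decode(caPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.flannel-761631"}},
            DNSNames:     []string{"flannel-761631", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.248")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }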
	I1026 02:29:02.913157   77486 provision.go:177] copyRemoteCerts
	I1026 02:29:02.913219   77486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 02:29:02.913243   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:02.916026   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.916413   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:02.916444   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:02.916681   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:02.916851   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:02.917045   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:02.917183   77486 sshutil.go:53] new ssh client: &{IP:192.168.61.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa Username:docker}
	I1026 02:29:03.005367   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 02:29:03.030552   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1026 02:29:03.053798   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 02:29:03.082304   77486 provision.go:87] duration metric: took 347.290274ms to configureAuth
	I1026 02:29:03.082332   77486 buildroot.go:189] setting minikube options for container-runtime
	I1026 02:29:03.082523   77486 config.go:182] Loaded profile config "flannel-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:29:03.082627   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:03.085726   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.086074   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:03.086112   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.086311   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:03.086514   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:03.086717   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:03.086862   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:03.087074   77486 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:03.087297   77486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.248 22 <nil> <nil>}
	I1026 02:29:03.087323   77486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 02:29:03.299261   77486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 02:29:03.299299   77486 main.go:141] libmachine: Checking connection to Docker...
	I1026 02:29:03.299311   77486 main.go:141] libmachine: (flannel-761631) Calling .GetURL
	I1026 02:29:03.300717   77486 main.go:141] libmachine: (flannel-761631) DBG | Using libvirt version 6000000
	I1026 02:29:03.303302   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.303673   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:03.303718   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.303895   77486 main.go:141] libmachine: Docker is up and running!
	I1026 02:29:03.303909   77486 main.go:141] libmachine: Reticulating splines...
	I1026 02:29:03.303915   77486 client.go:171] duration metric: took 26.411935173s to LocalClient.Create
	I1026 02:29:03.303937   77486 start.go:167] duration metric: took 26.412005141s to libmachine.API.Create "flannel-761631"
	I1026 02:29:03.303946   77486 start.go:293] postStartSetup for "flannel-761631" (driver="kvm2")
	I1026 02:29:03.303965   77486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 02:29:03.303988   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:03.304217   77486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 02:29:03.304244   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:03.306504   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.306863   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:03.306891   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.307064   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:03.307246   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:03.307391   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:03.307554   77486 sshutil.go:53] new ssh client: &{IP:192.168.61.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa Username:docker}
	I1026 02:29:03.388055   77486 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 02:29:03.392300   77486 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 02:29:03.392325   77486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 02:29:03.392386   77486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 02:29:03.392456   77486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 02:29:03.392538   77486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 02:29:03.401915   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:29:03.424503   77486 start.go:296] duration metric: took 120.523915ms for postStartSetup
	I1026 02:29:03.424551   77486 main.go:141] libmachine: (flannel-761631) Calling .GetConfigRaw
	I1026 02:29:03.425142   77486 main.go:141] libmachine: (flannel-761631) Calling .GetIP
	I1026 02:29:03.427498   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.427795   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:03.427818   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.428058   77486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/config.json ...
	I1026 02:29:03.428238   77486 start.go:128] duration metric: took 26.556076835s to createHost
	I1026 02:29:03.428265   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:03.430133   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.430448   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:03.430488   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.430639   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:03.430812   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:03.430944   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:03.431065   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:03.431257   77486 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:03.431451   77486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.248 22 <nil> <nil>}
	I1026 02:29:03.431464   77486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 02:29:03.537852   77486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729909743.512612811
	
	I1026 02:29:03.537874   77486 fix.go:216] guest clock: 1729909743.512612811
	I1026 02:29:03.537881   77486 fix.go:229] Guest: 2024-10-26 02:29:03.512612811 +0000 UTC Remote: 2024-10-26 02:29:03.428253389 +0000 UTC m=+26.667896241 (delta=84.359422ms)
	I1026 02:29:03.537900   77486 fix.go:200] guest clock delta is within tolerance: 84.359422ms
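The fix.go lines above compare the guest's date +%s.%N output against the host clock and skip any resync because the ~84ms delta is inside the allowed tolerance. A rough sketch of that check follows; the 2-second tolerance used here is an assumption, not necessarily minikube's value.

    // Sketch of the guest-clock tolerance check; the tolerance value is an assumption.
    package main

    import (
        "fmt"
        "time"
    )

    func needsClockSync(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta > tolerance
    }

    func main() {
        host := time.Now()
        guest := host.Add(84 * time.Millisecond) // comparable to the delta logged above
        fmt.Println(needsClockSync(guest, host, 2*time.Second)) // false: within tolerance
    }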
	I1026 02:29:03.537905   77486 start.go:83] releasing machines lock for "flannel-761631", held for 26.665803633s
	I1026 02:29:03.537930   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:03.538175   77486 main.go:141] libmachine: (flannel-761631) Calling .GetIP
	I1026 02:29:03.540690   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.541080   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:03.541108   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.541252   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:03.541794   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:03.541985   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:03.542087   77486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 02:29:03.542120   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:03.542393   77486 ssh_runner.go:195] Run: cat /version.json
	I1026 02:29:03.542416   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:03.544843   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.545212   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:03.545245   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.545308   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.545566   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:03.545733   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:03.545790   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:03.545819   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:03.545900   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:03.545961   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:03.546043   77486 sshutil.go:53] new ssh client: &{IP:192.168.61.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa Username:docker}
	I1026 02:29:03.546074   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:03.546201   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:03.546317   77486 sshutil.go:53] new ssh client: &{IP:192.168.61.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa Username:docker}
	I1026 02:29:03.655118   77486 ssh_runner.go:195] Run: systemctl --version
	I1026 02:29:03.661097   77486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 02:29:03.820803   77486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 02:29:03.827693   77486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 02:29:03.827765   77486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 02:29:03.843988   77486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 02:29:03.844011   77486 start.go:495] detecting cgroup driver to use...
	I1026 02:29:03.844082   77486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 02:29:03.860998   77486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 02:29:03.875158   77486 docker.go:217] disabling cri-docker service (if available) ...
	I1026 02:29:03.875218   77486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 02:29:03.888848   77486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 02:29:03.902570   77486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 02:29:04.031377   77486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 02:29:04.184250   77486 docker.go:233] disabling docker service ...
	I1026 02:29:04.184302   77486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 02:29:04.200026   77486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 02:29:04.212863   77486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 02:29:04.369442   77486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 02:29:04.485151   77486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 02:29:04.499036   77486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 02:29:04.518134   77486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 02:29:04.518202   77486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:04.527866   77486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 02:29:04.527960   77486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:04.538314   77486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:04.548172   77486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:04.558277   77486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 02:29:04.568312   77486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:04.578600   77486 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:04.594896   77486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
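Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (section headers and any untouched keys omitted; this is the net effect implied by the commands, not a dump of the actual file):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]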
	I1026 02:29:04.605167   77486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 02:29:04.615577   77486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 02:29:04.615634   77486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 02:29:04.628647   77486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 02:29:04.639122   77486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:29:04.796937   77486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 02:29:04.900700   77486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 02:29:04.900770   77486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 02:29:04.905538   77486 start.go:563] Will wait 60s for crictl version
	I1026 02:29:04.905580   77486 ssh_runner.go:195] Run: which crictl
	I1026 02:29:04.908908   77486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 02:29:04.947058   77486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 02:29:04.947158   77486 ssh_runner.go:195] Run: crio --version
	I1026 02:29:04.983443   77486 ssh_runner.go:195] Run: crio --version
	I1026 02:29:05.014057   77486 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 02:29:05.015316   77486 main.go:141] libmachine: (flannel-761631) Calling .GetIP
	I1026 02:29:05.022973   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:05.023624   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:05.023653   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:05.023903   77486 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1026 02:29:05.031431   77486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:29:05.044361   77486 kubeadm.go:883] updating cluster {Name:flannel-761631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.248 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 02:29:05.044522   77486 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:29:05.044596   77486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:29:05.086779   77486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1026 02:29:05.086837   77486 ssh_runner.go:195] Run: which lz4
	I1026 02:29:05.090929   77486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 02:29:05.095066   77486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 02:29:05.095099   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1026 02:29:06.384582   77486 crio.go:462] duration metric: took 1.293706653s to copy over tarball
	I1026 02:29:06.384669   77486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
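Because the expected kube-apiserver:v1.31.2 image is not yet on the node, the preloaded image tarball is copied over SSH and unpacked into /var. A sketch of the presence check that drives this decision follows; the crictl JSON field names used here are assumptions.

    // Sketch: decide whether the preload tarball is needed by checking
    // `crictl images --output json` for the expected apiserver image.
    // The JSON layout ("images", "repoTags") is an assumption.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func hasImage(want string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                if strings.EqualFold(tag, want) {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.2")
        fmt.Println(ok, err) // false would trigger copying /preloaded.tar.lz4
    }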
	I1026 02:29:03.540342   79140 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1026 02:29:03.540543   79140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:03.540599   79140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:03.557622   79140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34775
	I1026 02:29:03.558133   79140 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:03.558727   79140 main.go:141] libmachine: Using API Version  1
	I1026 02:29:03.558774   79140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:03.559132   79140 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:03.559317   79140 main.go:141] libmachine: (bridge-761631) Calling .GetMachineName
	I1026 02:29:03.559462   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:03.559634   79140 start.go:159] libmachine.API.Create for "bridge-761631" (driver="kvm2")
	I1026 02:29:03.559665   79140 client.go:168] LocalClient.Create starting
	I1026 02:29:03.559703   79140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem
	I1026 02:29:03.559745   79140 main.go:141] libmachine: Decoding PEM data...
	I1026 02:29:03.559763   79140 main.go:141] libmachine: Parsing certificate...
	I1026 02:29:03.559842   79140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem
	I1026 02:29:03.559872   79140 main.go:141] libmachine: Decoding PEM data...
	I1026 02:29:03.559888   79140 main.go:141] libmachine: Parsing certificate...
	I1026 02:29:03.559917   79140 main.go:141] libmachine: Running pre-create checks...
	I1026 02:29:03.559929   79140 main.go:141] libmachine: (bridge-761631) Calling .PreCreateCheck
	I1026 02:29:03.560291   79140 main.go:141] libmachine: (bridge-761631) Calling .GetConfigRaw
	I1026 02:29:03.560740   79140 main.go:141] libmachine: Creating machine...
	I1026 02:29:03.560757   79140 main.go:141] libmachine: (bridge-761631) Calling .Create
	I1026 02:29:03.560908   79140 main.go:141] libmachine: (bridge-761631) Creating KVM machine...
	I1026 02:29:03.562333   79140 main.go:141] libmachine: (bridge-761631) DBG | found existing default KVM network
	I1026 02:29:03.563829   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:03.563645   79257 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:27:59:05} reservation:<nil>}
	I1026 02:29:03.565313   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:03.565241   79257 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00034c0e0}
	I1026 02:29:03.565379   79140 main.go:141] libmachine: (bridge-761631) DBG | created network xml: 
	I1026 02:29:03.565394   79140 main.go:141] libmachine: (bridge-761631) DBG | <network>
	I1026 02:29:03.565404   79140 main.go:141] libmachine: (bridge-761631) DBG |   <name>mk-bridge-761631</name>
	I1026 02:29:03.565433   79140 main.go:141] libmachine: (bridge-761631) DBG |   <dns enable='no'/>
	I1026 02:29:03.565443   79140 main.go:141] libmachine: (bridge-761631) DBG |   
	I1026 02:29:03.565453   79140 main.go:141] libmachine: (bridge-761631) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1026 02:29:03.565465   79140 main.go:141] libmachine: (bridge-761631) DBG |     <dhcp>
	I1026 02:29:03.565487   79140 main.go:141] libmachine: (bridge-761631) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1026 02:29:03.565502   79140 main.go:141] libmachine: (bridge-761631) DBG |     </dhcp>
	I1026 02:29:03.565512   79140 main.go:141] libmachine: (bridge-761631) DBG |   </ip>
	I1026 02:29:03.565520   79140 main.go:141] libmachine: (bridge-761631) DBG |   
	I1026 02:29:03.565529   79140 main.go:141] libmachine: (bridge-761631) DBG | </network>
	I1026 02:29:03.565538   79140 main.go:141] libmachine: (bridge-761631) DBG | 
	I1026 02:29:03.571055   79140 main.go:141] libmachine: (bridge-761631) DBG | trying to create private KVM network mk-bridge-761631 192.168.50.0/24...
	I1026 02:29:03.641834   79140 main.go:141] libmachine: (bridge-761631) DBG | private KVM network mk-bridge-761631 192.168.50.0/24 created
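As the network.go lines above show, the driver skips private /24 subnets already claimed by existing libvirt networks (192.168.39.0/24 on virbr1) and takes the first free one (192.168.50.0/24) for mk-bridge-761631. A simplified sketch of that scan follows; the candidate list and the set of in-use subnets are assumptions for illustration.

    // Sketch of the free-subnet scan: step through candidate private /24s and
    // return the first that is not already used by an existing network.
    package main

    import (
        "fmt"
        "net"
    )

    func firstFreeSubnet(candidates []string, inUse map[string]bool) (*net.IPNet, error) {
        for _, c := range candidates {
            _, subnet, err := net.ParseCIDR(c)
            if err != nil {
                return nil, err
            }
            if !inUse[subnet.String()] {
                return subnet, nil
            }
        }
        return nil, fmt.Errorf("no free subnet among %v", candidates)
    }

    func main() {
        candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"}
        inUse := map[string]bool{"192.168.39.0/24": true} // taken by virbr1 in the log
        subnet, err := firstFreeSubnet(candidates, inUse)
        fmt.Println(subnet, err) // 192.168.50.0/24 <nil>
    }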
	I1026 02:29:03.641864   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:03.641748   79257 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:29:03.641875   79140 main.go:141] libmachine: (bridge-761631) Setting up store path in /home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631 ...
	I1026 02:29:03.641899   79140 main.go:141] libmachine: (bridge-761631) Building disk image from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 02:29:03.641928   79140 main.go:141] libmachine: (bridge-761631) Downloading /home/jenkins/minikube-integration/19868-8680/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1026 02:29:03.893026   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:03.892917   79257 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa...
	I1026 02:29:03.982799   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:03.982663   79257 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/bridge-761631.rawdisk...
	I1026 02:29:03.982835   79140 main.go:141] libmachine: (bridge-761631) DBG | Writing magic tar header
	I1026 02:29:03.982849   79140 main.go:141] libmachine: (bridge-761631) DBG | Writing SSH key tar header
	I1026 02:29:03.982862   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:03.982781   79257 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631 ...
	I1026 02:29:03.982939   79140 main.go:141] libmachine: (bridge-761631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631
	I1026 02:29:03.982974   79140 main.go:141] libmachine: (bridge-761631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube/machines
	I1026 02:29:03.982990   79140 main.go:141] libmachine: (bridge-761631) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631 (perms=drwx------)
	I1026 02:29:03.983010   79140 main.go:141] libmachine: (bridge-761631) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube/machines (perms=drwxr-xr-x)
	I1026 02:29:03.983023   79140 main.go:141] libmachine: (bridge-761631) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680/.minikube (perms=drwxr-xr-x)
	I1026 02:29:03.983035   79140 main.go:141] libmachine: (bridge-761631) Setting executable bit set on /home/jenkins/minikube-integration/19868-8680 (perms=drwxrwxr-x)
	I1026 02:29:03.983048   79140 main.go:141] libmachine: (bridge-761631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 02:29:03.983058   79140 main.go:141] libmachine: (bridge-761631) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 02:29:03.983070   79140 main.go:141] libmachine: (bridge-761631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19868-8680
	I1026 02:29:03.983081   79140 main.go:141] libmachine: (bridge-761631) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 02:29:03.983095   79140 main.go:141] libmachine: (bridge-761631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1026 02:29:03.983109   79140 main.go:141] libmachine: (bridge-761631) DBG | Checking permissions on dir: /home/jenkins
	I1026 02:29:03.983120   79140 main.go:141] libmachine: (bridge-761631) DBG | Checking permissions on dir: /home
	I1026 02:29:03.983133   79140 main.go:141] libmachine: (bridge-761631) DBG | Skipping /home - not owner
	I1026 02:29:03.983148   79140 main.go:141] libmachine: (bridge-761631) Creating domain...
	I1026 02:29:03.984253   79140 main.go:141] libmachine: (bridge-761631) define libvirt domain using xml: 
	I1026 02:29:03.984280   79140 main.go:141] libmachine: (bridge-761631) <domain type='kvm'>
	I1026 02:29:03.984290   79140 main.go:141] libmachine: (bridge-761631)   <name>bridge-761631</name>
	I1026 02:29:03.984301   79140 main.go:141] libmachine: (bridge-761631)   <memory unit='MiB'>3072</memory>
	I1026 02:29:03.984310   79140 main.go:141] libmachine: (bridge-761631)   <vcpu>2</vcpu>
	I1026 02:29:03.984314   79140 main.go:141] libmachine: (bridge-761631)   <features>
	I1026 02:29:03.984319   79140 main.go:141] libmachine: (bridge-761631)     <acpi/>
	I1026 02:29:03.984324   79140 main.go:141] libmachine: (bridge-761631)     <apic/>
	I1026 02:29:03.984330   79140 main.go:141] libmachine: (bridge-761631)     <pae/>
	I1026 02:29:03.984336   79140 main.go:141] libmachine: (bridge-761631)     
	I1026 02:29:03.984341   79140 main.go:141] libmachine: (bridge-761631)   </features>
	I1026 02:29:03.984351   79140 main.go:141] libmachine: (bridge-761631)   <cpu mode='host-passthrough'>
	I1026 02:29:03.984385   79140 main.go:141] libmachine: (bridge-761631)   
	I1026 02:29:03.984403   79140 main.go:141] libmachine: (bridge-761631)   </cpu>
	I1026 02:29:03.984430   79140 main.go:141] libmachine: (bridge-761631)   <os>
	I1026 02:29:03.984451   79140 main.go:141] libmachine: (bridge-761631)     <type>hvm</type>
	I1026 02:29:03.984464   79140 main.go:141] libmachine: (bridge-761631)     <boot dev='cdrom'/>
	I1026 02:29:03.984475   79140 main.go:141] libmachine: (bridge-761631)     <boot dev='hd'/>
	I1026 02:29:03.984486   79140 main.go:141] libmachine: (bridge-761631)     <bootmenu enable='no'/>
	I1026 02:29:03.984509   79140 main.go:141] libmachine: (bridge-761631)   </os>
	I1026 02:29:03.984518   79140 main.go:141] libmachine: (bridge-761631)   <devices>
	I1026 02:29:03.984530   79140 main.go:141] libmachine: (bridge-761631)     <disk type='file' device='cdrom'>
	I1026 02:29:03.984546   79140 main.go:141] libmachine: (bridge-761631)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/boot2docker.iso'/>
	I1026 02:29:03.984558   79140 main.go:141] libmachine: (bridge-761631)       <target dev='hdc' bus='scsi'/>
	I1026 02:29:03.984569   79140 main.go:141] libmachine: (bridge-761631)       <readonly/>
	I1026 02:29:03.984580   79140 main.go:141] libmachine: (bridge-761631)     </disk>
	I1026 02:29:03.984588   79140 main.go:141] libmachine: (bridge-761631)     <disk type='file' device='disk'>
	I1026 02:29:03.984608   79140 main.go:141] libmachine: (bridge-761631)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1026 02:29:03.984626   79140 main.go:141] libmachine: (bridge-761631)       <source file='/home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/bridge-761631.rawdisk'/>
	I1026 02:29:03.984639   79140 main.go:141] libmachine: (bridge-761631)       <target dev='hda' bus='virtio'/>
	I1026 02:29:03.984649   79140 main.go:141] libmachine: (bridge-761631)     </disk>
	I1026 02:29:03.984659   79140 main.go:141] libmachine: (bridge-761631)     <interface type='network'>
	I1026 02:29:03.984677   79140 main.go:141] libmachine: (bridge-761631)       <source network='mk-bridge-761631'/>
	I1026 02:29:03.984686   79140 main.go:141] libmachine: (bridge-761631)       <model type='virtio'/>
	I1026 02:29:03.984707   79140 main.go:141] libmachine: (bridge-761631)     </interface>
	I1026 02:29:03.984723   79140 main.go:141] libmachine: (bridge-761631)     <interface type='network'>
	I1026 02:29:03.984734   79140 main.go:141] libmachine: (bridge-761631)       <source network='default'/>
	I1026 02:29:03.984746   79140 main.go:141] libmachine: (bridge-761631)       <model type='virtio'/>
	I1026 02:29:03.984759   79140 main.go:141] libmachine: (bridge-761631)     </interface>
	I1026 02:29:03.984771   79140 main.go:141] libmachine: (bridge-761631)     <serial type='pty'>
	I1026 02:29:03.984782   79140 main.go:141] libmachine: (bridge-761631)       <target port='0'/>
	I1026 02:29:03.984793   79140 main.go:141] libmachine: (bridge-761631)     </serial>
	I1026 02:29:03.984803   79140 main.go:141] libmachine: (bridge-761631)     <console type='pty'>
	I1026 02:29:03.984814   79140 main.go:141] libmachine: (bridge-761631)       <target type='serial' port='0'/>
	I1026 02:29:03.984823   79140 main.go:141] libmachine: (bridge-761631)     </console>
	I1026 02:29:03.984848   79140 main.go:141] libmachine: (bridge-761631)     <rng model='virtio'>
	I1026 02:29:03.984866   79140 main.go:141] libmachine: (bridge-761631)       <backend model='random'>/dev/random</backend>
	I1026 02:29:03.984879   79140 main.go:141] libmachine: (bridge-761631)     </rng>
	I1026 02:29:03.984888   79140 main.go:141] libmachine: (bridge-761631)     
	I1026 02:29:03.984897   79140 main.go:141] libmachine: (bridge-761631)     
	I1026 02:29:03.984907   79140 main.go:141] libmachine: (bridge-761631)   </devices>
	I1026 02:29:03.984919   79140 main.go:141] libmachine: (bridge-761631) </domain>
	I1026 02:29:03.984925   79140 main.go:141] libmachine: (bridge-761631) 
	I1026 02:29:03.990144   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:f8:6c:2f in network default
	I1026 02:29:03.990722   79140 main.go:141] libmachine: (bridge-761631) Ensuring networks are active...
	I1026 02:29:03.990771   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:03.991439   79140 main.go:141] libmachine: (bridge-761631) Ensuring network default is active
	I1026 02:29:03.991787   79140 main.go:141] libmachine: (bridge-761631) Ensuring network mk-bridge-761631 is active
	I1026 02:29:03.992334   79140 main.go:141] libmachine: (bridge-761631) Getting domain xml...
	I1026 02:29:03.993137   79140 main.go:141] libmachine: (bridge-761631) Creating domain...
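Defining and booting the domain from the XML above goes through the libvirt API. A minimal sketch using the libvirt Go bindings follows; which binding the kvm2 driver actually vendors is not visible in this log, so libvirt.org/go/libvirt and the XML file path are assumptions.

    // Sketch: define a persistent domain from XML and start it via the libvirt Go bindings.
    package main

    import (
        "log"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        xml, err := os.ReadFile("bridge-761631.xml") // the <domain> document logged above
        if err != nil {
            log.Fatal(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // persists the definition
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boots the VM ("Creating domain...")
            log.Fatal(err)
        }
    }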
	I1026 02:29:05.398126   79140 main.go:141] libmachine: (bridge-761631) Waiting to get IP...
	I1026 02:29:05.399080   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:05.399572   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:05.399599   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:05.399546   79257 retry.go:31] will retry after 209.544491ms: waiting for machine to come up
	I1026 02:29:05.611703   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:05.614223   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:05.614254   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:05.614128   79257 retry.go:31] will retry after 236.803159ms: waiting for machine to come up
	I1026 02:29:05.852793   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:05.853468   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:05.853493   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:05.853338   79257 retry.go:31] will retry after 403.786232ms: waiting for machine to come up
	I1026 02:29:06.259139   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:06.259801   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:06.259825   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:06.259763   79257 retry.go:31] will retry after 468.969978ms: waiting for machine to come up
	I1026 02:29:06.730685   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:06.731406   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:06.731439   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:06.731358   79257 retry.go:31] will retry after 592.815717ms: waiting for machine to come up
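While the new domain boots, the driver polls for a DHCP lease and waits a little longer after each failed attempt (209ms, 236ms, 403ms, ... above). A generic sketch of such a poll-with-backoff loop follows; the growth factor, jitter and timeout are assumptions rather than the actual retry.go parameters.

    // Sketch of poll-until-IP with growing waits, in the spirit of the retry.go lines above.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        wait := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(wait + time.Duration(rand.Int63n(int64(wait/2)))) // base wait plus jitter
            if wait < 5*time.Second {
                wait = wait * 3 / 2 // grow the base wait
            }
        }
        return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("no lease yet")
            }
            return "192.168.50.10", nil // stand-in address for the example
        }, 30*time.Second)
        fmt.Println(ip, err)
    }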
	I1026 02:29:08.766592   77486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.3818896s)
	I1026 02:29:08.766622   77486 crio.go:469] duration metric: took 2.382010529s to extract the tarball
	I1026 02:29:08.766632   77486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 02:29:08.806348   77486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:29:08.854169   77486 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 02:29:08.854198   77486 cache_images.go:84] Images are preloaded, skipping loading
	I1026 02:29:08.854208   77486 kubeadm.go:934] updating node { 192.168.61.248 8443 v1.31.2 crio true true} ...
	I1026 02:29:08.854321   77486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-761631 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:flannel-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I1026 02:29:08.854406   77486 ssh_runner.go:195] Run: crio config
	I1026 02:29:08.916328   77486 cni.go:84] Creating CNI manager for "flannel"
	I1026 02:29:08.916352   77486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 02:29:08.916375   77486 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.248 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-761631 NodeName:flannel-761631 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 02:29:08.916526   77486 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-761631"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.248"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.248"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
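	The block above is the multi-document config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube renders and, a few lines below, writes to /var/tmp/minikube/kubeadm.yaml.new on the node. As a minimal Go sketch, assuming the gopkg.in/yaml.v3 package and the on-node path from this log, one way to split such a file back into its documents and list their kinds:

    // split_kubeadm_yaml.go - illustrative only; decodes each YAML document
    // in the generated kubeadm config and prints its apiVersion and kind.
    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	// Path taken from the log above; adjust when running elsewhere.
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break // no more documents
    		} else if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }

	For the config shown here this would report four documents: two kubeadm.k8s.io/v1beta4 documents plus the kubelet and kube-proxy component configs.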
	
	I1026 02:29:08.916582   77486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 02:29:08.926749   77486 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 02:29:08.926808   77486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 02:29:08.935370   77486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1026 02:29:08.960660   77486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 02:29:08.976856   77486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1026 02:29:08.993075   77486 ssh_runner.go:195] Run: grep 192.168.61.248	control-plane.minikube.internal$ /etc/hosts
	I1026 02:29:08.996750   77486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.248	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:29:09.009318   77486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:29:09.152886   77486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:29:09.172962   77486 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631 for IP: 192.168.61.248
	I1026 02:29:09.172984   77486 certs.go:194] generating shared ca certs ...
	I1026 02:29:09.173004   77486 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:09.173163   77486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 02:29:09.173221   77486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 02:29:09.173233   77486 certs.go:256] generating profile certs ...
	I1026 02:29:09.173299   77486 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.key
	I1026 02:29:09.173315   77486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt with IP's: []
	I1026 02:29:09.340952   77486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt ...
	I1026 02:29:09.340985   77486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.crt: {Name:mk60fd82ad62306bfc219fc9d355b470e6d5fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:09.341321   77486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.key ...
	I1026 02:29:09.341346   77486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/client.key: {Name:mkd499e20bc992f3b2dc2fb5764fdc851cf3ca5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:09.342116   77486 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.key.e4a97253
	I1026 02:29:09.342141   77486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.crt.e4a97253 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.248]
	I1026 02:29:09.413853   77486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.crt.e4a97253 ...
	I1026 02:29:09.413879   77486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.crt.e4a97253: {Name:mk583e07ca25dcda4e47e41be43863a944cbb66a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:09.414033   77486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.key.e4a97253 ...
	I1026 02:29:09.414048   77486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.key.e4a97253: {Name:mk94450c712e1ffcb37d68122bde08f681bf9f1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:09.414142   77486 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.crt.e4a97253 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.crt
	I1026 02:29:09.414248   77486 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.key.e4a97253 -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.key
	I1026 02:29:09.414330   77486 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/proxy-client.key
	I1026 02:29:09.414349   77486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/proxy-client.crt with IP's: []
	I1026 02:29:09.552525   77486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/proxy-client.crt ...
	I1026 02:29:09.552552   77486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/proxy-client.crt: {Name:mk35e307ae13de04b93087b17c0414e37720490b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:09.552739   77486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/proxy-client.key ...
	I1026 02:29:09.552752   77486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/proxy-client.key: {Name:mk2d7c24cb98b88b8f1e364eed062e2b83bf86cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
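	The crypto.go lines above generate the profile certificates (client, apiserver, aggregator proxy-client), with the apiserver cert signed for the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.248]. A rough, self-contained sketch using only the Go standard library of how a certificate with those IP SANs can be produced; the file names, 24h validity, self-signing, and 2048-bit key here are illustrative assumptions, not minikube's exact settings:

    // selfsigned_san_cert.go - generates a self-signed cert carrying the
    // IP SANs seen in the log and writes PEM-encoded cert and key files.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "demo-apiserver"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{ // SANs matching the apiserver cert in the log
    			net.ParseIP("10.96.0.1"),
    			net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.61.248"),
    		},
    	}

    	// Self-signed for brevity: the template acts as both subject and issuer.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}

    	certOut, err := os.Create("demo.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    	certOut.Close()

    	keyOut, err := os.Create("demo.key")
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	keyOut.Close()
    }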
	I1026 02:29:09.552963   77486 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 02:29:09.553004   77486 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 02:29:09.553015   77486 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 02:29:09.553036   77486 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 02:29:09.553059   77486 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 02:29:09.553081   77486 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 02:29:09.553119   77486 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:29:09.553774   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 02:29:09.580676   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 02:29:09.604867   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 02:29:09.631527   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 02:29:09.662390   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 02:29:09.689868   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 02:29:09.717846   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 02:29:09.746298   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/flannel-761631/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 02:29:09.772659   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 02:29:09.797712   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 02:29:09.823043   77486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 02:29:09.854285   77486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 02:29:09.878022   77486 ssh_runner.go:195] Run: openssl version
	I1026 02:29:09.885068   77486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 02:29:09.903443   77486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 02:29:09.910261   77486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 02:29:09.910309   77486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 02:29:09.918367   77486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 02:29:09.937816   77486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 02:29:09.949781   77486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:29:09.954565   77486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:29:09.954631   77486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:29:09.960651   77486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 02:29:09.971689   77486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 02:29:09.982432   77486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 02:29:09.986796   77486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 02:29:09.986861   77486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 02:29:09.992646   77486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 02:29:10.003297   77486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 02:29:10.007036   77486 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 02:29:10.007084   77486 kubeadm.go:392] StartCluster: {Name:flannel-761631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.248 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:29:10.007146   77486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 02:29:10.007183   77486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:29:10.051824   77486 cri.go:89] found id: ""
	I1026 02:29:10.051920   77486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 02:29:10.062287   77486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 02:29:10.075721   77486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:29:10.089939   77486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:29:10.089966   77486 kubeadm.go:157] found existing configuration files:
	
	I1026 02:29:10.090018   77486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 02:29:10.100358   77486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:29:10.100422   77486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:29:10.112799   77486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 02:29:10.124510   77486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:29:10.124578   77486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:29:10.137012   77486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 02:29:10.148800   77486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:29:10.148854   77486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:29:10.161156   77486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 02:29:10.171960   77486 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:29:10.172035   77486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 02:29:10.183113   77486 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 02:29:10.238771   77486 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1026 02:29:10.238873   77486 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 02:29:10.363978   77486 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 02:29:10.364113   77486 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 02:29:10.364233   77486 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 02:29:10.375354   77486 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 02:29:10.453508   77486 out.go:235]   - Generating certificates and keys ...
	I1026 02:29:10.453626   77486 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 02:29:10.453712   77486 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 02:29:10.505380   77486 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 02:29:10.607383   77486 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1026 02:29:10.985093   77486 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1026 02:29:11.090154   77486 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1026 02:29:11.220927   77486 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1026 02:29:11.221130   77486 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-761631 localhost] and IPs [192.168.61.248 127.0.0.1 ::1]
	I1026 02:29:11.561401   77486 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1026 02:29:11.561650   77486 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-761631 localhost] and IPs [192.168.61.248 127.0.0.1 ::1]
	I1026 02:29:11.633523   77486 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 02:29:11.784291   77486 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 02:29:07.326305   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:07.327055   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:07.327086   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:07.327020   79257 retry.go:31] will retry after 588.834605ms: waiting for machine to come up
	I1026 02:29:07.917851   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:07.918379   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:07.918408   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:07.918325   79257 retry.go:31] will retry after 853.665263ms: waiting for machine to come up
	I1026 02:29:08.773683   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:08.774257   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:08.774284   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:08.774213   79257 retry.go:31] will retry after 1.370060539s: waiting for machine to come up
	I1026 02:29:10.146643   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:10.147145   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:10.147173   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:10.147109   79257 retry.go:31] will retry after 1.521712642s: waiting for machine to come up
	I1026 02:29:11.670458   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:11.670928   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:11.670955   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:11.670888   79257 retry.go:31] will retry after 1.580274021s: waiting for machine to come up
	I1026 02:29:12.001325   77486 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1026 02:29:12.001578   77486 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 02:29:12.143567   77486 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 02:29:12.239025   77486 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 02:29:12.472872   77486 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 02:29:12.906638   77486 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 02:29:13.057013   77486 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 02:29:13.057808   77486 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 02:29:13.060204   77486 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 02:29:13.062029   77486 out.go:235]   - Booting up control plane ...
	I1026 02:29:13.062147   77486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 02:29:13.062254   77486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 02:29:13.062364   77486 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 02:29:13.087233   77486 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 02:29:13.094292   77486 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 02:29:13.094459   77486 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 02:29:13.262679   77486 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 02:29:13.262844   77486 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 02:29:13.764852   77486 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.822984ms
	I1026 02:29:13.764970   77486 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1026 02:29:13.252346   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:13.252785   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:13.252813   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:13.252741   79257 retry.go:31] will retry after 2.501165629s: waiting for machine to come up
	I1026 02:29:15.755812   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:15.756157   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:15.756178   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:15.756105   79257 retry.go:31] will retry after 3.067156454s: waiting for machine to come up
	I1026 02:29:19.262401   77486 kubeadm.go:310] [api-check] The API server is healthy after 5.501282988s
	I1026 02:29:19.276398   77486 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 02:29:19.294112   77486 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 02:29:19.320353   77486 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 02:29:19.320636   77486 kubeadm.go:310] [mark-control-plane] Marking the node flannel-761631 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 02:29:19.331623   77486 kubeadm.go:310] [bootstrap-token] Using token: igjkpp.i376pagzjlp08yff
	I1026 02:29:19.332835   77486 out.go:235]   - Configuring RBAC rules ...
	I1026 02:29:19.332973   77486 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 02:29:19.339298   77486 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 02:29:19.348141   77486 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 02:29:19.352106   77486 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 02:29:19.361610   77486 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 02:29:19.366790   77486 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 02:29:19.669645   77486 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 02:29:20.102403   77486 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1026 02:29:20.669559   77486 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1026 02:29:20.671428   77486 kubeadm.go:310] 
	I1026 02:29:20.671493   77486 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1026 02:29:20.671500   77486 kubeadm.go:310] 
	I1026 02:29:20.671583   77486 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1026 02:29:20.671594   77486 kubeadm.go:310] 
	I1026 02:29:20.671619   77486 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1026 02:29:20.671676   77486 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 02:29:20.671744   77486 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 02:29:20.671753   77486 kubeadm.go:310] 
	I1026 02:29:20.671798   77486 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1026 02:29:20.671804   77486 kubeadm.go:310] 
	I1026 02:29:20.671843   77486 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 02:29:20.671850   77486 kubeadm.go:310] 
	I1026 02:29:20.671892   77486 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1026 02:29:20.672016   77486 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 02:29:20.672105   77486 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 02:29:20.672132   77486 kubeadm.go:310] 
	I1026 02:29:20.672250   77486 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 02:29:20.672359   77486 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1026 02:29:20.672370   77486 kubeadm.go:310] 
	I1026 02:29:20.672460   77486 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token igjkpp.i376pagzjlp08yff \
	I1026 02:29:20.672568   77486 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d \
	I1026 02:29:20.672611   77486 kubeadm.go:310] 	--control-plane 
	I1026 02:29:20.672622   77486 kubeadm.go:310] 
	I1026 02:29:20.672737   77486 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1026 02:29:20.672746   77486 kubeadm.go:310] 
	I1026 02:29:20.672843   77486 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token igjkpp.i376pagzjlp08yff \
	I1026 02:29:20.672961   77486 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d 
	I1026 02:29:20.673835   77486 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
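	The sha256:... value in the kubeadm join commands above is the discovery-token CA certificate hash: the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A small Go sketch, assuming the CA path minikube copied to the node earlier in this log (/var/lib/minikube/certs/ca.crt), that recomputes it:

    // ca_cert_hash.go - prints the kubeadm-style SPKI hash for a CA cert.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found in CA file")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Hash the raw Subject Public Key Info, as kubeadm does for
    	// --discovery-token-ca-cert-hash.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }

	Run against the CA used here, the output should match the b3d00111e6ff... hash shown in both join commands above.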
	I1026 02:29:20.673856   77486 cni.go:84] Creating CNI manager for "flannel"
	I1026 02:29:20.675502   77486 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I1026 02:29:20.676788   77486 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 02:29:20.683988   77486 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1026 02:29:20.684010   77486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I1026 02:29:20.700524   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 02:29:21.081090   77486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 02:29:21.081142   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:21.081181   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-761631 minikube.k8s.io/updated_at=2024_10_26T02_29_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=flannel-761631 minikube.k8s.io/primary=true
	I1026 02:29:21.118686   77486 ops.go:34] apiserver oom_adj: -16
	I1026 02:29:21.257435   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:21.758243   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:18.825123   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:18.825724   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:18.825750   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:18.825657   79257 retry.go:31] will retry after 3.727894276s: waiting for machine to come up
	I1026 02:29:22.258398   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:22.757544   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:23.258231   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:23.758129   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:24.258255   77486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:24.339513   77486 kubeadm.go:1113] duration metric: took 3.258424764s to wait for elevateKubeSystemPrivileges
	I1026 02:29:24.339545   77486 kubeadm.go:394] duration metric: took 14.332464563s to StartCluster
	I1026 02:29:24.339561   77486 settings.go:142] acquiring lock: {Name:mkb363a7a1b1532a7f832b54a0283d0a9e3d2b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:24.339635   77486 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:29:24.340556   77486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:24.340779   77486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 02:29:24.340778   77486 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.248 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 02:29:24.340801   77486 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 02:29:24.340885   77486 addons.go:69] Setting storage-provisioner=true in profile "flannel-761631"
	I1026 02:29:24.340957   77486 addons.go:234] Setting addon storage-provisioner=true in "flannel-761631"
	I1026 02:29:24.340981   77486 config.go:182] Loaded profile config "flannel-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:29:24.340992   77486 host.go:66] Checking if "flannel-761631" exists ...
	I1026 02:29:24.340898   77486 addons.go:69] Setting default-storageclass=true in profile "flannel-761631"
	I1026 02:29:24.341042   77486 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-761631"
	I1026 02:29:24.341453   77486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:24.341474   77486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:24.341495   77486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:24.341510   77486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:24.342409   77486 out.go:177] * Verifying Kubernetes components...
	I1026 02:29:24.343888   77486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:29:24.356991   77486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46349
	I1026 02:29:24.357257   77486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40659
	I1026 02:29:24.357505   77486 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:24.357709   77486 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:24.358102   77486 main.go:141] libmachine: Using API Version  1
	I1026 02:29:24.358128   77486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:24.358220   77486 main.go:141] libmachine: Using API Version  1
	I1026 02:29:24.358237   77486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:24.358485   77486 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:24.358522   77486 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:24.358661   77486 main.go:141] libmachine: (flannel-761631) Calling .GetState
	I1026 02:29:24.358997   77486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:24.359041   77486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:24.361873   77486 addons.go:234] Setting addon default-storageclass=true in "flannel-761631"
	I1026 02:29:24.361912   77486 host.go:66] Checking if "flannel-761631" exists ...
	I1026 02:29:24.362258   77486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:24.362296   77486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:24.373806   77486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32855
	I1026 02:29:24.374321   77486 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:24.374881   77486 main.go:141] libmachine: Using API Version  1
	I1026 02:29:24.374918   77486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:24.375198   77486 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:24.375389   77486 main.go:141] libmachine: (flannel-761631) Calling .GetState
	I1026 02:29:24.377236   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:24.378531   77486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I1026 02:29:24.378972   77486 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:24.378979   77486 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:29:24.379404   77486 main.go:141] libmachine: Using API Version  1
	I1026 02:29:24.379426   77486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:24.379718   77486 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:24.380127   77486 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:29:24.380134   77486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:24.380142   77486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 02:29:24.380193   77486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:24.380263   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:24.383226   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:24.383664   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:24.383691   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:24.383955   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:24.384123   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:24.384287   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:24.384398   77486 sshutil.go:53] new ssh client: &{IP:192.168.61.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa Username:docker}
	I1026 02:29:24.395245   77486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43507
	I1026 02:29:24.395646   77486 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:24.396110   77486 main.go:141] libmachine: Using API Version  1
	I1026 02:29:24.396129   77486 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:24.396426   77486 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:24.396588   77486 main.go:141] libmachine: (flannel-761631) Calling .GetState
	I1026 02:29:24.397917   77486 main.go:141] libmachine: (flannel-761631) Calling .DriverName
	I1026 02:29:24.398103   77486 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 02:29:24.398119   77486 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 02:29:24.398138   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHHostname
	I1026 02:29:24.400434   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:24.400852   77486 main.go:141] libmachine: (flannel-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ad:74", ip: ""} in network mk-flannel-761631: {Iface:virbr3 ExpiryTime:2024-10-26 03:28:51 +0000 UTC Type:0 Mac:52:54:00:e1:ad:74 Iaid: IPaddr:192.168.61.248 Prefix:24 Hostname:flannel-761631 Clientid:01:52:54:00:e1:ad:74}
	I1026 02:29:24.400877   77486 main.go:141] libmachine: (flannel-761631) DBG | domain flannel-761631 has defined IP address 192.168.61.248 and MAC address 52:54:00:e1:ad:74 in network mk-flannel-761631
	I1026 02:29:24.401040   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHPort
	I1026 02:29:24.401224   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHKeyPath
	I1026 02:29:24.401361   77486 main.go:141] libmachine: (flannel-761631) Calling .GetSSHUsername
	I1026 02:29:24.401496   77486 sshutil.go:53] new ssh client: &{IP:192.168.61.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/flannel-761631/id_rsa Username:docker}
	I1026 02:29:24.549354   77486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:29:24.549551   77486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 02:29:24.566375   77486 node_ready.go:35] waiting up to 15m0s for node "flannel-761631" to be "Ready" ...
	I1026 02:29:24.656359   77486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:29:24.711281   77486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 02:29:25.005762   77486 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1026 02:29:25.455191   77486 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:25.455260   77486 main.go:141] libmachine: (flannel-761631) Calling .Close
	I1026 02:29:25.455226   77486 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:25.455352   77486 main.go:141] libmachine: (flannel-761631) Calling .Close
	I1026 02:29:25.455623   77486 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:25.455636   77486 main.go:141] libmachine: (flannel-761631) DBG | Closing plugin on server side
	I1026 02:29:25.455640   77486 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:25.455656   77486 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:25.455675   77486 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:25.455679   77486 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:25.455686   77486 main.go:141] libmachine: (flannel-761631) Calling .Close
	I1026 02:29:25.455691   77486 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:25.455701   77486 main.go:141] libmachine: (flannel-761631) Calling .Close
	I1026 02:29:25.455902   77486 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:25.455925   77486 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:25.456009   77486 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:25.456022   77486 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:25.456059   77486 main.go:141] libmachine: (flannel-761631) DBG | Closing plugin on server side
	I1026 02:29:25.466730   77486 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:25.466753   77486 main.go:141] libmachine: (flannel-761631) Calling .Close
	I1026 02:29:25.467021   77486 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:25.467037   77486 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:25.468528   77486 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1026 02:29:25.469503   77486 addons.go:510] duration metric: took 1.128704492s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 02:29:25.510240   77486 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-761631" context rescaled to 1 replicas
	I1026 02:29:26.569270   77486 node_ready.go:53] node "flannel-761631" has status "Ready":"False"
	I1026 02:29:22.554616   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:22.555173   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find current IP address of domain bridge-761631 in network mk-bridge-761631
	I1026 02:29:22.555199   79140 main.go:141] libmachine: (bridge-761631) DBG | I1026 02:29:22.555136   79257 retry.go:31] will retry after 5.242559388s: waiting for machine to come up
	I1026 02:29:27.799416   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:27.799964   79140 main.go:141] libmachine: (bridge-761631) Found IP for machine: 192.168.50.234
	I1026 02:29:27.799998   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has current primary IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:27.800007   79140 main.go:141] libmachine: (bridge-761631) Reserving static IP address...
	I1026 02:29:27.800402   79140 main.go:141] libmachine: (bridge-761631) DBG | unable to find host DHCP lease matching {name: "bridge-761631", mac: "52:54:00:62:c2:12", ip: "192.168.50.234"} in network mk-bridge-761631
	I1026 02:29:27.878412   79140 main.go:141] libmachine: (bridge-761631) DBG | Getting to WaitForSSH function...
	I1026 02:29:27.878453   79140 main.go:141] libmachine: (bridge-761631) Reserved static IP address: 192.168.50.234
	I1026 02:29:27.878467   79140 main.go:141] libmachine: (bridge-761631) Waiting for SSH to be available...
	I1026 02:29:27.881553   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:27.882058   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:27.882088   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:27.882238   79140 main.go:141] libmachine: (bridge-761631) DBG | Using SSH client type: external
	I1026 02:29:27.882266   79140 main.go:141] libmachine: (bridge-761631) DBG | Using SSH private key: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa (-rw-------)
	I1026 02:29:27.882294   79140 main.go:141] libmachine: (bridge-761631) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1026 02:29:27.882325   79140 main.go:141] libmachine: (bridge-761631) DBG | About to run SSH command:
	I1026 02:29:27.882337   79140 main.go:141] libmachine: (bridge-761631) DBG | exit 0
	I1026 02:29:28.009463   79140 main.go:141] libmachine: (bridge-761631) DBG | SSH cmd err, output: <nil>: 
	I1026 02:29:28.009733   79140 main.go:141] libmachine: (bridge-761631) KVM machine creation complete!
	I1026 02:29:28.010053   79140 main.go:141] libmachine: (bridge-761631) Calling .GetConfigRaw
	I1026 02:29:28.010540   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:28.010723   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:28.010878   79140 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1026 02:29:28.010891   79140 main.go:141] libmachine: (bridge-761631) Calling .GetState
	I1026 02:29:28.012129   79140 main.go:141] libmachine: Detecting operating system of created instance...
	I1026 02:29:28.012143   79140 main.go:141] libmachine: Waiting for SSH to be available...
	I1026 02:29:28.012149   79140 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 02:29:28.012164   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:28.014418   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.014769   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.014795   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.014961   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:28.015109   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.015246   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.015358   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:28.015470   79140 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:28.015657   79140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I1026 02:29:28.015667   79140 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 02:29:28.120712   79140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:29:28.120741   79140 main.go:141] libmachine: Detecting the provisioner...
	I1026 02:29:28.120749   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:28.123722   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.124062   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.124088   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.124294   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:28.124490   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.124640   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.124763   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:28.124922   79140 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:28.125173   79140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I1026 02:29:28.125188   79140 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1026 02:29:28.233890   79140 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1026 02:29:28.233981   79140 main.go:141] libmachine: found compatible host: buildroot
	I1026 02:29:28.233994   79140 main.go:141] libmachine: Provisioning with buildroot...
	I1026 02:29:28.234006   79140 main.go:141] libmachine: (bridge-761631) Calling .GetMachineName
	I1026 02:29:28.234260   79140 buildroot.go:166] provisioning hostname "bridge-761631"
	I1026 02:29:28.234288   79140 main.go:141] libmachine: (bridge-761631) Calling .GetMachineName
	I1026 02:29:28.234470   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:28.236972   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.237358   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.237385   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.237545   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:28.237702   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.237848   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.237977   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:28.238127   79140 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:28.238336   79140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I1026 02:29:28.238348   79140 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-761631 && echo "bridge-761631" | sudo tee /etc/hostname
	I1026 02:29:28.358934   79140 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-761631
	
	I1026 02:29:28.358968   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:28.361630   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.361980   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.362006   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.362152   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:28.362336   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.362488   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.362601   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:28.362925   79140 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:28.363138   79140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I1026 02:29:28.363154   79140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-761631' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-761631/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-761631' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 02:29:28.483387   79140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 02:29:28.483413   79140 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19868-8680/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-8680/.minikube}
	I1026 02:29:28.483466   79140 buildroot.go:174] setting up certificates
	I1026 02:29:28.483478   79140 provision.go:84] configureAuth start
	I1026 02:29:28.483488   79140 main.go:141] libmachine: (bridge-761631) Calling .GetMachineName
	I1026 02:29:28.483738   79140 main.go:141] libmachine: (bridge-761631) Calling .GetIP
	I1026 02:29:28.486204   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.486517   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.486554   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.486692   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:28.488771   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.489078   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.489113   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.489181   79140 provision.go:143] copyHostCerts
	I1026 02:29:28.489267   79140 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem, removing ...
	I1026 02:29:28.489283   79140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem
	I1026 02:29:28.489350   79140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/key.pem (1679 bytes)
	I1026 02:29:28.489492   79140 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem, removing ...
	I1026 02:29:28.489501   79140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem
	I1026 02:29:28.489531   79140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/ca.pem (1082 bytes)
	I1026 02:29:28.489618   79140 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem, removing ...
	I1026 02:29:28.489627   79140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem
	I1026 02:29:28.489654   79140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-8680/.minikube/cert.pem (1123 bytes)
	I1026 02:29:28.489738   79140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem org=jenkins.bridge-761631 san=[127.0.0.1 192.168.50.234 bridge-761631 localhost minikube]
	I1026 02:29:28.606055   79140 provision.go:177] copyRemoteCerts
	I1026 02:29:28.606128   79140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 02:29:28.606157   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:28.608894   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.609268   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.609294   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.609532   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:28.609700   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.609832   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:28.609925   79140 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa Username:docker}
	I1026 02:29:28.695542   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 02:29:28.719597   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 02:29:28.741478   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 02:29:28.762510   79140 provision.go:87] duration metric: took 279.018391ms to configureAuth
	I1026 02:29:28.762541   79140 buildroot.go:189] setting minikube options for container-runtime
	I1026 02:29:28.762714   79140 config.go:182] Loaded profile config "bridge-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:29:28.762780   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:28.765305   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.765735   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.765769   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.765907   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:28.766068   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.766220   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.766347   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:28.766500   79140 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:28.766707   79140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I1026 02:29:28.766723   79140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 02:29:28.990952   79140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 02:29:28.990986   79140 main.go:141] libmachine: Checking connection to Docker...
	I1026 02:29:28.990996   79140 main.go:141] libmachine: (bridge-761631) Calling .GetURL
	I1026 02:29:28.992009   79140 main.go:141] libmachine: (bridge-761631) DBG | Using libvirt version 6000000
	I1026 02:29:28.994355   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.994667   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.994708   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.994861   79140 main.go:141] libmachine: Docker is up and running!
	I1026 02:29:28.994877   79140 main.go:141] libmachine: Reticulating splines...
	I1026 02:29:28.994883   79140 client.go:171] duration metric: took 25.435212479s to LocalClient.Create
	I1026 02:29:28.994904   79140 start.go:167] duration metric: took 25.435274209s to libmachine.API.Create "bridge-761631"
	I1026 02:29:28.994911   79140 start.go:293] postStartSetup for "bridge-761631" (driver="kvm2")
	I1026 02:29:28.994929   79140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 02:29:28.994946   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:28.995173   79140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 02:29:28.995201   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:28.997253   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.997615   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:28.997644   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:28.997817   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:28.997978   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:28.998112   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:28.998248   79140 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa Username:docker}
	I1026 02:29:29.083262   79140 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 02:29:29.087047   79140 info.go:137] Remote host: Buildroot 2023.02.9
	I1026 02:29:29.087079   79140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/addons for local assets ...
	I1026 02:29:29.087151   79140 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-8680/.minikube/files for local assets ...
	I1026 02:29:29.087269   79140 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem -> 176152.pem in /etc/ssl/certs
	I1026 02:29:29.087386   79140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 02:29:29.096639   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:29:29.119723   79140 start.go:296] duration metric: took 124.795288ms for postStartSetup
	I1026 02:29:29.119781   79140 main.go:141] libmachine: (bridge-761631) Calling .GetConfigRaw
	I1026 02:29:29.120466   79140 main.go:141] libmachine: (bridge-761631) Calling .GetIP
	I1026 02:29:29.123262   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.123663   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:29.123690   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.123909   79140 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/config.json ...
	I1026 02:29:29.124105   79140 start.go:128] duration metric: took 25.58589441s to createHost
	I1026 02:29:29.124126   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:29.126058   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.126404   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:29.126425   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.126610   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:29.126763   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:29.126888   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:29.127003   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:29.127167   79140 main.go:141] libmachine: Using SSH client type: native
	I1026 02:29:29.127326   79140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I1026 02:29:29.127336   79140 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 02:29:29.238917   79140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729909769.216503060
	
	I1026 02:29:29.238942   79140 fix.go:216] guest clock: 1729909769.216503060
	I1026 02:29:29.238952   79140 fix.go:229] Guest: 2024-10-26 02:29:29.21650306 +0000 UTC Remote: 2024-10-26 02:29:29.124116784 +0000 UTC m=+32.306517015 (delta=92.386276ms)
	I1026 02:29:29.238985   79140 fix.go:200] guest clock delta is within tolerance: 92.386276ms
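	Note on the clock check logged above: minikube reads the guest's "date +%s.%N", compares it against the host-side remote timestamp, and only corrects the guest clock when the difference exceeds a tolerance. The Go snippet below is a minimal sketch of that comparison using the values from this log; the one-second tolerance and the helper name are assumptions, not minikube's actual fix.go code.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds) into a
	// time.Time. Helper name and parsing details are illustrative assumptions.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		// Values taken from the log above: guest clock vs. host-side remote timestamp.
		guest, _ := parseGuestClock("1729909769.216503060")
		host := time.Date(2024, 10, 26, 2, 29, 29, 124116784, time.UTC)

		const tolerance = time.Second // assumed threshold, not minikube's real constant
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}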
	I1026 02:29:29.238990   79140 start.go:83] releasing machines lock for "bridge-761631", held for 25.700950423s
	I1026 02:29:29.239006   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:29.239266   79140 main.go:141] libmachine: (bridge-761631) Calling .GetIP
	I1026 02:29:29.242242   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.242613   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:29.242640   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.242845   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:29.243277   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:29.243435   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:29.243545   79140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 02:29:29.243589   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:29.243691   79140 ssh_runner.go:195] Run: cat /version.json
	I1026 02:29:29.243715   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:29.246033   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.246357   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.246378   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:29.246409   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.246550   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:29.246696   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:29.246822   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:29.246849   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:29.246866   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:29.247016   79140 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa Username:docker}
	I1026 02:29:29.247046   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:29.247207   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:29.247368   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:29.247488   79140 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa Username:docker}
	I1026 02:29:29.356403   79140 ssh_runner.go:195] Run: systemctl --version
	I1026 02:29:29.362585   79140 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 02:29:29.519312   79140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 02:29:29.524546   79140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 02:29:29.524614   79140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 02:29:29.540036   79140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 02:29:29.540062   79140 start.go:495] detecting cgroup driver to use...
	I1026 02:29:29.540119   79140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 02:29:29.555431   79140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 02:29:29.568499   79140 docker.go:217] disabling cri-docker service (if available) ...
	I1026 02:29:29.568557   79140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 02:29:29.581888   79140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 02:29:29.594423   79140 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 02:29:29.708968   79140 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 02:29:29.887103   79140 docker.go:233] disabling docker service ...
	I1026 02:29:29.887184   79140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 02:29:29.902996   79140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 02:29:29.917323   79140 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 02:29:30.055076   79140 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 02:29:30.176154   79140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 02:29:30.191040   79140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 02:29:30.214132   79140 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 02:29:30.214183   79140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:30.225094   79140 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 02:29:30.225158   79140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:30.235756   79140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:30.245621   79140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:30.256117   79140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 02:29:30.269296   79140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:30.280377   79140 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:30.300325   79140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 02:29:30.310664   79140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 02:29:30.321276   79140 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 02:29:30.321337   79140 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 02:29:30.335449   79140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 02:29:30.344654   79140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:29:30.477091   79140 ssh_runner.go:195] Run: sudo systemctl restart crio
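	Note on the CRI-O configuration steps above: each setting (pause image, cgroup manager, conmon_cgroup, default_sysctls) is applied by rewriting /etc/crio/crio.conf.d/02-crio.conf in place with sed before crio is restarted. The Go sketch below reconstructs the pause-image command string seen in the log; the function name is illustrative and the quoting is a plausible runnable form, not minikube's actual crio package code.

	package main

	import "fmt"

	// pauseImageSedCmd rebuilds the sed invocation logged above, which rewrites
	// the pause_image entry in CRI-O's drop-in config before crio is restarted.
	// Illustrative only; minikube assembles this command inside its own code.
	func pauseImageSedCmd(image string) string {
		return fmt.Sprintf(
			`sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = \"%s\"|' /etc/crio/crio.conf.d/02-crio.conf"`,
			image)
	}

	func main() {
		fmt.Println(pauseImageSedCmd("registry.k8s.io/pause:3.10"))
	}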
	I1026 02:29:30.560664   79140 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 02:29:30.560746   79140 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 02:29:30.565026   79140 start.go:563] Will wait 60s for crictl version
	I1026 02:29:30.565078   79140 ssh_runner.go:195] Run: which crictl
	I1026 02:29:30.568710   79140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 02:29:30.611177   79140 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 02:29:30.611273   79140 ssh_runner.go:195] Run: crio --version
	I1026 02:29:30.641197   79140 ssh_runner.go:195] Run: crio --version
	I1026 02:29:30.675790   79140 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1026 02:29:28.571391   77486 node_ready.go:53] node "flannel-761631" has status "Ready":"False"
	I1026 02:29:31.069756   77486 node_ready.go:53] node "flannel-761631" has status "Ready":"False"
	I1026 02:29:30.676943   79140 main.go:141] libmachine: (bridge-761631) Calling .GetIP
	I1026 02:29:30.679671   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:30.680030   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:30.680093   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:30.680233   79140 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1026 02:29:30.684283   79140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:29:30.696184   79140 kubeadm.go:883] updating cluster {Name:bridge-761631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 02:29:30.696294   79140 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 02:29:30.696345   79140 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:29:30.730209   79140 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1026 02:29:30.730280   79140 ssh_runner.go:195] Run: which lz4
	I1026 02:29:30.733919   79140 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 02:29:30.737681   79140 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 02:29:30.737710   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
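	Note on the preload handling above: the runner stats /preloaded.tar.lz4, and a non-zero exit is taken to mean the tarball is absent, which triggers the ~392 MB scp and the lz4 extraction that follows. The sketch below shows the same check run locally with os/exec; it is illustrative and does not reproduce minikube's ssh_runner API.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// needsPreloadCopy mirrors the existence check in the log: stat the target
	// path and treat a non-zero exit status as "file absent", which is the signal
	// to copy the preload tarball over. Local sketch only; minikube runs the same
	// stat over SSH through its ssh_runner.
	func needsPreloadCopy(path string) bool {
		// Only the exit code matters here; the size/mtime output is discarded.
		return exec.Command("stat", "-c", "%s %y", path).Run() != nil
	}

	func main() {
		if needsPreloadCopy("/preloaded.tar.lz4") {
			fmt.Println("preload tarball missing; would scp the preloaded-images tarball")
		} else {
			fmt.Println("preload tarball already present; skipping copy")
		}
	}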
	I1026 02:29:33.070906   77486 node_ready.go:49] node "flannel-761631" has status "Ready":"True"
	I1026 02:29:33.070944   77486 node_ready.go:38] duration metric: took 8.504544013s for node "flannel-761631" to be "Ready" ...
	I1026 02:29:33.070957   77486 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:29:33.080658   77486 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-46w28" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:35.090267   77486 pod_ready.go:103] pod "coredns-7c65d6cfc9-46w28" in "kube-system" namespace has status "Ready":"False"
	I1026 02:29:32.017369   79140 crio.go:462] duration metric: took 1.28349159s to copy over tarball
	I1026 02:29:32.017500   79140 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 02:29:34.236021   79140 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.218492472s)
	I1026 02:29:34.236046   79140 crio.go:469] duration metric: took 2.218648878s to extract the tarball
	I1026 02:29:34.236053   79140 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 02:29:34.271451   79140 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 02:29:34.312396   79140 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 02:29:34.312423   79140 cache_images.go:84] Images are preloaded, skipping loading
	I1026 02:29:34.312433   79140 kubeadm.go:934] updating node { 192.168.50.234 8443 v1.31.2 crio true true} ...
	I1026 02:29:34.312539   79140 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-761631 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:bridge-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1026 02:29:34.312621   79140 ssh_runner.go:195] Run: crio config
	I1026 02:29:34.361402   79140 cni.go:84] Creating CNI manager for "bridge"
	I1026 02:29:34.361449   79140 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 02:29:34.361476   79140 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.234 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-761631 NodeName:bridge-761631 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 02:29:34.361620   79140 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-761631"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.234"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.234"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 02:29:34.361691   79140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 02:29:34.374250   79140 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 02:29:34.374322   79140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 02:29:34.384039   79140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1026 02:29:34.403170   79140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 02:29:34.421890   79140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1026 02:29:34.438089   79140 ssh_runner.go:195] Run: grep 192.168.50.234	control-plane.minikube.internal$ /etc/hosts
	I1026 02:29:34.442189   79140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.234	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 02:29:34.454244   79140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:29:34.578036   79140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:29:34.597007   79140 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631 for IP: 192.168.50.234
	I1026 02:29:34.597035   79140 certs.go:194] generating shared ca certs ...
	I1026 02:29:34.597055   79140 certs.go:226] acquiring lock for ca certs: {Name:mk60355c56273f3f70d3fac7385f027c309d4a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:34.597240   79140 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key
	I1026 02:29:34.597297   79140 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key
	I1026 02:29:34.597310   79140 certs.go:256] generating profile certs ...
	I1026 02:29:34.597381   79140 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.key
	I1026 02:29:34.597400   79140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt with IP's: []
	I1026 02:29:34.741373   79140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt ...
	I1026 02:29:34.741401   79140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.crt: {Name:mkc4cd4d1bccd5089183954b26279211f5d756cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:34.741586   79140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.key ...
	I1026 02:29:34.741598   79140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/client.key: {Name:mkdc10d78a03b28651203ac3496bcd643469f528 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:34.741677   79140 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.key.01d3e0dd
	I1026 02:29:34.741692   79140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.crt.01d3e0dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.234]
	I1026 02:29:34.855620   79140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.crt.01d3e0dd ...
	I1026 02:29:34.855649   79140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.crt.01d3e0dd: {Name:mk925642f87d27331d95b4da2e25b3e311a30842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:34.855799   79140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.key.01d3e0dd ...
	I1026 02:29:34.855811   79140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.key.01d3e0dd: {Name:mk9740863d6253ed528a1333e9f3510e5305462d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:34.855877   79140 certs.go:381] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.crt.01d3e0dd -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.crt
	I1026 02:29:34.855952   79140 certs.go:385] copying /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.key.01d3e0dd -> /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.key
	I1026 02:29:34.856002   79140 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/proxy-client.key
	I1026 02:29:34.856015   79140 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/proxy-client.crt with IP's: []
	I1026 02:29:35.015622   79140 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/proxy-client.crt ...
	I1026 02:29:35.015648   79140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/proxy-client.crt: {Name:mkbf3e835a69ee7e48d04f654560e899bd3b3674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:35.015795   79140 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/proxy-client.key ...
	I1026 02:29:35.015805   79140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/proxy-client.key: {Name:mke780832f50437d0a211749f10c83e726275217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
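	Note on the profile certificates above: the apiserver cert is generated with IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.50.234, matching the service CIDR and the node IP. The Go sketch below produces a certificate carrying the same IP SANs with crypto/x509; it self-signs for brevity, whereas minikube signs these certs with its minikubeCA key, so treat the signing path and names as assumptions.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Generate a throwaway key; minikube reuses its own CA and profile keys.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{ // SANs matching the log line above
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.50.234"),
			},
		}

		// Self-signed here (template == parent); minikube signs with minikubeCA.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}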
	I1026 02:29:35.015975   79140 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem (1338 bytes)
	W1026 02:29:35.016012   79140 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615_empty.pem, impossibly tiny 0 bytes
	I1026 02:29:35.016018   79140 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 02:29:35.016039   79140 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/ca.pem (1082 bytes)
	I1026 02:29:35.016064   79140 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/cert.pem (1123 bytes)
	I1026 02:29:35.016085   79140 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/certs/key.pem (1679 bytes)
	I1026 02:29:35.016142   79140 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem (1708 bytes)
	I1026 02:29:35.016685   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 02:29:35.040122   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 02:29:35.062557   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 02:29:35.087485   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 02:29:35.110454   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 02:29:35.133145   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 02:29:35.155074   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 02:29:35.177163   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/bridge-761631/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 02:29:35.198652   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/certs/17615.pem --> /usr/share/ca-certificates/17615.pem (1338 bytes)
	I1026 02:29:35.219923   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/ssl/certs/176152.pem --> /usr/share/ca-certificates/176152.pem (1708 bytes)
	I1026 02:29:35.241241   79140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-8680/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 02:29:35.273004   79140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 02:29:35.296045   79140 ssh_runner.go:195] Run: openssl version
	I1026 02:29:35.301917   79140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17615.pem && ln -fs /usr/share/ca-certificates/17615.pem /etc/ssl/certs/17615.pem"
	I1026 02:29:35.312175   79140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17615.pem
	I1026 02:29:35.316671   79140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:56 /usr/share/ca-certificates/17615.pem
	I1026 02:29:35.316731   79140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17615.pem
	I1026 02:29:35.322719   79140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17615.pem /etc/ssl/certs/51391683.0"
	I1026 02:29:35.333477   79140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176152.pem && ln -fs /usr/share/ca-certificates/176152.pem /etc/ssl/certs/176152.pem"
	I1026 02:29:35.344537   79140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176152.pem
	I1026 02:29:35.349048   79140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:56 /usr/share/ca-certificates/176152.pem
	I1026 02:29:35.349098   79140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176152.pem
	I1026 02:29:35.354681   79140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176152.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 02:29:35.364998   79140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 02:29:35.375616   79140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:29:35.380100   79140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:29:35.380158   79140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 02:29:35.385818   79140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 02:29:35.396002   79140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 02:29:35.400041   79140 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 02:29:35.400091   79140 kubeadm.go:392] StartCluster: {Name:bridge-761631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-761631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 02:29:35.400166   79140 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 02:29:35.400207   79140 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 02:29:35.435104   79140 cri.go:89] found id: ""
	I1026 02:29:35.435176   79140 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 02:29:35.444318   79140 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 02:29:35.453219   79140 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 02:29:35.462317   79140 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 02:29:35.462333   79140 kubeadm.go:157] found existing configuration files:
	
	I1026 02:29:35.462369   79140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 02:29:35.471145   79140 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 02:29:35.471253   79140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 02:29:35.480515   79140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 02:29:35.489384   79140 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 02:29:35.489458   79140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 02:29:35.498443   79140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 02:29:35.507149   79140 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 02:29:35.507205   79140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 02:29:35.515975   79140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 02:29:35.524194   79140 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 02:29:35.524249   79140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 02:29:35.533114   79140 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 02:29:35.703210   79140 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 02:29:37.588135   77486 pod_ready.go:103] pod "coredns-7c65d6cfc9-46w28" in "kube-system" namespace has status "Ready":"False"
	I1026 02:29:40.087422   77486 pod_ready.go:103] pod "coredns-7c65d6cfc9-46w28" in "kube-system" namespace has status "Ready":"False"
	I1026 02:29:45.919150   79140 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1026 02:29:45.919229   79140 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 02:29:45.919336   79140 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 02:29:45.919438   79140 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 02:29:45.919542   79140 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 02:29:45.919636   79140 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 02:29:45.921123   79140 out.go:235]   - Generating certificates and keys ...
	I1026 02:29:45.921211   79140 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 02:29:45.921284   79140 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 02:29:45.921375   79140 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 02:29:45.921476   79140 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1026 02:29:45.921569   79140 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1026 02:29:45.921685   79140 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1026 02:29:45.921779   79140 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1026 02:29:45.921937   79140 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-761631 localhost] and IPs [192.168.50.234 127.0.0.1 ::1]
	I1026 02:29:45.922005   79140 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1026 02:29:45.922173   79140 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-761631 localhost] and IPs [192.168.50.234 127.0.0.1 ::1]
	I1026 02:29:45.922256   79140 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 02:29:45.922338   79140 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 02:29:45.922403   79140 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1026 02:29:45.922447   79140 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 02:29:45.922493   79140 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 02:29:45.922571   79140 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 02:29:45.922645   79140 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 02:29:45.922740   79140 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 02:29:45.922818   79140 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 02:29:45.922953   79140 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 02:29:45.923031   79140 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 02:29:45.924258   79140 out.go:235]   - Booting up control plane ...
	I1026 02:29:45.924346   79140 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 02:29:45.924440   79140 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 02:29:45.924527   79140 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 02:29:45.924670   79140 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 02:29:45.924805   79140 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 02:29:45.924864   79140 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 02:29:45.925051   79140 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 02:29:45.925186   79140 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 02:29:45.925265   79140 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.99081ms
	I1026 02:29:45.925353   79140 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1026 02:29:45.925450   79140 kubeadm.go:310] [api-check] The API server is healthy after 5.001677707s
	I1026 02:29:45.925594   79140 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 02:29:45.925767   79140 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 02:29:45.925845   79140 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 02:29:45.926066   79140 kubeadm.go:310] [mark-control-plane] Marking the node bridge-761631 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 02:29:45.926130   79140 kubeadm.go:310] [bootstrap-token] Using token: 3a94l2.wnr5sqdsr9c515xe
	I1026 02:29:45.928085   79140 out.go:235]   - Configuring RBAC rules ...
	I1026 02:29:45.928193   79140 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 02:29:45.928296   79140 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 02:29:45.928439   79140 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 02:29:45.928567   79140 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 02:29:45.928690   79140 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 02:29:45.928799   79140 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 02:29:45.928970   79140 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 02:29:45.929015   79140 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1026 02:29:45.929055   79140 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1026 02:29:45.929063   79140 kubeadm.go:310] 
	I1026 02:29:45.929112   79140 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1026 02:29:45.929119   79140 kubeadm.go:310] 
	I1026 02:29:45.929199   79140 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1026 02:29:45.929207   79140 kubeadm.go:310] 
	I1026 02:29:45.929228   79140 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1026 02:29:45.929308   79140 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 02:29:45.929381   79140 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 02:29:45.929393   79140 kubeadm.go:310] 
	I1026 02:29:45.929488   79140 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1026 02:29:45.929497   79140 kubeadm.go:310] 
	I1026 02:29:45.929566   79140 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 02:29:45.929576   79140 kubeadm.go:310] 
	I1026 02:29:45.929652   79140 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1026 02:29:45.929772   79140 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 02:29:45.929863   79140 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 02:29:45.929872   79140 kubeadm.go:310] 
	I1026 02:29:45.929973   79140 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 02:29:45.930083   79140 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1026 02:29:45.930095   79140 kubeadm.go:310] 
	I1026 02:29:45.930191   79140 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3a94l2.wnr5sqdsr9c515xe \
	I1026 02:29:45.930310   79140 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d \
	I1026 02:29:45.930339   79140 kubeadm.go:310] 	--control-plane 
	I1026 02:29:45.930349   79140 kubeadm.go:310] 
	I1026 02:29:45.930435   79140 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1026 02:29:45.930451   79140 kubeadm.go:310] 
	I1026 02:29:45.930524   79140 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3a94l2.wnr5sqdsr9c515xe \
	I1026 02:29:45.930634   79140 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3d00111e6fff05cd9321473d76accd14133ef3c53d7bfb8c456a07835eb5f2d 
	I1026 02:29:45.930649   79140 cni.go:84] Creating CNI manager for "bridge"
	I1026 02:29:45.931993   79140 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 02:29:42.088382   77486 pod_ready.go:103] pod "coredns-7c65d6cfc9-46w28" in "kube-system" namespace has status "Ready":"False"
	I1026 02:29:44.089678   77486 pod_ready.go:103] pod "coredns-7c65d6cfc9-46w28" in "kube-system" namespace has status "Ready":"False"
	I1026 02:29:45.586605   77486 pod_ready.go:93] pod "coredns-7c65d6cfc9-46w28" in "kube-system" namespace has status "Ready":"True"
	I1026 02:29:45.586629   77486 pod_ready.go:82] duration metric: took 12.505940428s for pod "coredns-7c65d6cfc9-46w28" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.586639   77486 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.592528   77486 pod_ready.go:93] pod "etcd-flannel-761631" in "kube-system" namespace has status "Ready":"True"
	I1026 02:29:45.592545   77486 pod_ready.go:82] duration metric: took 5.900244ms for pod "etcd-flannel-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.592554   77486 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.597525   77486 pod_ready.go:93] pod "kube-apiserver-flannel-761631" in "kube-system" namespace has status "Ready":"True"
	I1026 02:29:45.597541   77486 pod_ready.go:82] duration metric: took 4.982933ms for pod "kube-apiserver-flannel-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.597550   77486 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.602472   77486 pod_ready.go:93] pod "kube-controller-manager-flannel-761631" in "kube-system" namespace has status "Ready":"True"
	I1026 02:29:45.602487   77486 pod_ready.go:82] duration metric: took 4.931952ms for pod "kube-controller-manager-flannel-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.602496   77486 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-5gn8b" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.607418   77486 pod_ready.go:93] pod "kube-proxy-5gn8b" in "kube-system" namespace has status "Ready":"True"
	I1026 02:29:45.607436   77486 pod_ready.go:82] duration metric: took 4.933679ms for pod "kube-proxy-5gn8b" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.607445   77486 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.984741   77486 pod_ready.go:93] pod "kube-scheduler-flannel-761631" in "kube-system" namespace has status "Ready":"True"
	I1026 02:29:45.984765   77486 pod_ready.go:82] duration metric: took 377.314061ms for pod "kube-scheduler-flannel-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:45.984776   77486 pod_ready.go:39] duration metric: took 12.913779647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:29:45.984789   77486 api_server.go:52] waiting for apiserver process to appear ...
	I1026 02:29:45.984836   77486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:29:45.999759   77486 api_server.go:72] duration metric: took 21.658882332s to wait for apiserver process to appear ...
	I1026 02:29:45.999790   77486 api_server.go:88] waiting for apiserver healthz status ...
	I1026 02:29:45.999813   77486 api_server.go:253] Checking apiserver healthz at https://192.168.61.248:8443/healthz ...
	I1026 02:29:46.005506   77486 api_server.go:279] https://192.168.61.248:8443/healthz returned 200:
	ok
	I1026 02:29:46.006785   77486 api_server.go:141] control plane version: v1.31.2
	I1026 02:29:46.006821   77486 api_server.go:131] duration metric: took 7.023624ms to wait for apiserver health ...
	I1026 02:29:46.006830   77486 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 02:29:46.188748   77486 system_pods.go:59] 7 kube-system pods found
	I1026 02:29:46.188783   77486 system_pods.go:61] "coredns-7c65d6cfc9-46w28" [95524e8b-ebae-4f82-bb93-4c0877c206d7] Running
	I1026 02:29:46.188790   77486 system_pods.go:61] "etcd-flannel-761631" [021f5e7d-f838-41e2-8760-fa7d43b47f97] Running
	I1026 02:29:46.188796   77486 system_pods.go:61] "kube-apiserver-flannel-761631" [370db70a-478d-475b-89f5-f8f78bd856e6] Running
	I1026 02:29:46.188802   77486 system_pods.go:61] "kube-controller-manager-flannel-761631" [23d06d27-2e1f-423b-9314-6193d5812f94] Running
	I1026 02:29:46.188806   77486 system_pods.go:61] "kube-proxy-5gn8b" [9a895cde-6d7b-42aa-ad9e-49943865b4fe] Running
	I1026 02:29:46.188811   77486 system_pods.go:61] "kube-scheduler-flannel-761631" [47391923-c6fb-4b72-b107-6ccf6a1be461] Running
	I1026 02:29:46.188818   77486 system_pods.go:61] "storage-provisioner" [4f546ad1-6af3-40e6-bbb6-4a23e6424ff3] Running
	I1026 02:29:46.188825   77486 system_pods.go:74] duration metric: took 181.988223ms to wait for pod list to return data ...
	I1026 02:29:46.188833   77486 default_sa.go:34] waiting for default service account to be created ...
	I1026 02:29:46.384239   77486 default_sa.go:45] found service account: "default"
	I1026 02:29:46.384263   77486 default_sa.go:55] duration metric: took 195.42289ms for default service account to be created ...
	I1026 02:29:46.384272   77486 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 02:29:46.587198   77486 system_pods.go:86] 7 kube-system pods found
	I1026 02:29:46.587226   77486 system_pods.go:89] "coredns-7c65d6cfc9-46w28" [95524e8b-ebae-4f82-bb93-4c0877c206d7] Running
	I1026 02:29:46.587236   77486 system_pods.go:89] "etcd-flannel-761631" [021f5e7d-f838-41e2-8760-fa7d43b47f97] Running
	I1026 02:29:46.587242   77486 system_pods.go:89] "kube-apiserver-flannel-761631" [370db70a-478d-475b-89f5-f8f78bd856e6] Running
	I1026 02:29:46.587248   77486 system_pods.go:89] "kube-controller-manager-flannel-761631" [23d06d27-2e1f-423b-9314-6193d5812f94] Running
	I1026 02:29:46.587254   77486 system_pods.go:89] "kube-proxy-5gn8b" [9a895cde-6d7b-42aa-ad9e-49943865b4fe] Running
	I1026 02:29:46.587260   77486 system_pods.go:89] "kube-scheduler-flannel-761631" [47391923-c6fb-4b72-b107-6ccf6a1be461] Running
	I1026 02:29:46.587268   77486 system_pods.go:89] "storage-provisioner" [4f546ad1-6af3-40e6-bbb6-4a23e6424ff3] Running
	I1026 02:29:46.587276   77486 system_pods.go:126] duration metric: took 202.998368ms to wait for k8s-apps to be running ...
	I1026 02:29:46.587291   77486 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 02:29:46.587335   77486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 02:29:46.601033   77486 system_svc.go:56] duration metric: took 13.736973ms WaitForService to wait for kubelet
	I1026 02:29:46.601084   77486 kubeadm.go:582] duration metric: took 22.260202048s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:29:46.601101   77486 node_conditions.go:102] verifying NodePressure condition ...
	I1026 02:29:46.784852   77486 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 02:29:46.784878   77486 node_conditions.go:123] node cpu capacity is 2
	I1026 02:29:46.784891   77486 node_conditions.go:105] duration metric: took 183.785972ms to run NodePressure ...
	I1026 02:29:46.784901   77486 start.go:241] waiting for startup goroutines ...
	I1026 02:29:46.784907   77486 start.go:246] waiting for cluster config update ...
	I1026 02:29:46.784916   77486 start.go:255] writing updated cluster config ...
	I1026 02:29:46.785195   77486 ssh_runner.go:195] Run: rm -f paused
	I1026 02:29:46.829900   77486 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1026 02:29:46.831605   77486 out.go:177] * Done! kubectl is now configured to use "flannel-761631" cluster and "default" namespace by default
	W1026 02:29:46.840457   77486 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 697471db-7ca4-44ca-9cc4-0edbe17bfeea
	I1026 02:29:45.933282   79140 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 02:29:45.946292   79140 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1026 02:29:45.963252   79140 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 02:29:45.963311   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:45.963351   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-761631 minikube.k8s.io/updated_at=2024_10_26T02_29_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=bridge-761631 minikube.k8s.io/primary=true
	I1026 02:29:46.084640   79140 ops.go:34] apiserver oom_adj: -16
	I1026 02:29:46.084755   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:46.585533   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:47.085231   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:47.585132   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:48.085107   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:48.584952   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:49.085062   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:49.585704   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:50.085441   79140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 02:29:50.179423   79140 kubeadm.go:1113] duration metric: took 4.216166839s to wait for elevateKubeSystemPrivileges
	I1026 02:29:50.179462   79140 kubeadm.go:394] duration metric: took 14.779373824s to StartCluster
	I1026 02:29:50.179485   79140 settings.go:142] acquiring lock: {Name:mkb363a7a1b1532a7f832b54a0283d0a9e3d2b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:50.179566   79140 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 02:29:50.180656   79140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-8680/kubeconfig: {Name:mk1ca62d697157a626c1511d120f17a52f7de7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 02:29:50.180888   79140 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 02:29:50.180923   79140 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 02:29:50.180903   79140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 02:29:50.181035   79140 addons.go:69] Setting default-storageclass=true in profile "bridge-761631"
	I1026 02:29:50.181060   79140 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-761631"
	I1026 02:29:50.181069   79140 config.go:182] Loaded profile config "bridge-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 02:29:50.181025   79140 addons.go:69] Setting storage-provisioner=true in profile "bridge-761631"
	I1026 02:29:50.181144   79140 addons.go:234] Setting addon storage-provisioner=true in "bridge-761631"
	I1026 02:29:50.181189   79140 host.go:66] Checking if "bridge-761631" exists ...
	I1026 02:29:50.181648   79140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:50.181657   79140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:50.181701   79140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:50.181734   79140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:50.182595   79140 out.go:177] * Verifying Kubernetes components...
	I1026 02:29:50.183753   79140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 02:29:50.196965   79140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36609
	I1026 02:29:50.196966   79140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36409
	I1026 02:29:50.197446   79140 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:50.197502   79140 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:50.198034   79140 main.go:141] libmachine: Using API Version  1
	I1026 02:29:50.198055   79140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:50.198178   79140 main.go:141] libmachine: Using API Version  1
	I1026 02:29:50.198205   79140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:50.198414   79140 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:50.198572   79140 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:50.198605   79140 main.go:141] libmachine: (bridge-761631) Calling .GetState
	I1026 02:29:50.199148   79140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:50.199196   79140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:50.202486   79140 addons.go:234] Setting addon default-storageclass=true in "bridge-761631"
	I1026 02:29:50.202532   79140 host.go:66] Checking if "bridge-761631" exists ...
	I1026 02:29:50.202943   79140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:50.202990   79140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:50.215612   79140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37943
	I1026 02:29:50.216209   79140 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:50.216739   79140 main.go:141] libmachine: Using API Version  1
	I1026 02:29:50.216770   79140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:50.217132   79140 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:50.217347   79140 main.go:141] libmachine: (bridge-761631) Calling .GetState
	I1026 02:29:50.218601   79140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33157
	I1026 02:29:50.219227   79140 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:50.219301   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:50.219773   79140 main.go:141] libmachine: Using API Version  1
	I1026 02:29:50.219797   79140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:50.220067   79140 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:50.220504   79140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 02:29:50.220543   79140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 02:29:50.221017   79140 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 02:29:50.222310   79140 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:29:50.222328   79140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 02:29:50.222342   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:50.225627   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:50.226100   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:50.226129   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:50.226423   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:50.226614   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:50.226735   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:50.226868   79140 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa Username:docker}
	I1026 02:29:50.237479   79140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
	I1026 02:29:50.237972   79140 main.go:141] libmachine: () Calling .GetVersion
	I1026 02:29:50.238390   79140 main.go:141] libmachine: Using API Version  1
	I1026 02:29:50.238412   79140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 02:29:50.238864   79140 main.go:141] libmachine: () Calling .GetMachineName
	I1026 02:29:50.239000   79140 main.go:141] libmachine: (bridge-761631) Calling .GetState
	I1026 02:29:50.240392   79140 main.go:141] libmachine: (bridge-761631) Calling .DriverName
	I1026 02:29:50.240643   79140 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 02:29:50.240659   79140 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 02:29:50.240672   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHHostname
	I1026 02:29:50.243239   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:50.243509   79140 main.go:141] libmachine: (bridge-761631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c2:12", ip: ""} in network mk-bridge-761631: {Iface:virbr2 ExpiryTime:2024-10-26 03:29:19 +0000 UTC Type:0 Mac:52:54:00:62:c2:12 Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:bridge-761631 Clientid:01:52:54:00:62:c2:12}
	I1026 02:29:50.243528   79140 main.go:141] libmachine: (bridge-761631) DBG | domain bridge-761631 has defined IP address 192.168.50.234 and MAC address 52:54:00:62:c2:12 in network mk-bridge-761631
	I1026 02:29:50.243778   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHPort
	I1026 02:29:50.243954   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHKeyPath
	I1026 02:29:50.244078   79140 main.go:141] libmachine: (bridge-761631) Calling .GetSSHUsername
	I1026 02:29:50.244191   79140 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/bridge-761631/id_rsa Username:docker}
	I1026 02:29:50.418926   79140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 02:29:50.437628   79140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 02:29:50.556343   79140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 02:29:50.663762   79140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 02:29:50.858117   79140 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1026 02:29:50.859228   79140 node_ready.go:35] waiting up to 15m0s for node "bridge-761631" to be "Ready" ...
	I1026 02:29:50.875628   79140 node_ready.go:49] node "bridge-761631" has status "Ready":"True"
	I1026 02:29:50.875655   79140 node_ready.go:38] duration metric: took 16.40424ms for node "bridge-761631" to be "Ready" ...
	I1026 02:29:50.875668   79140 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:29:50.893111   79140 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace to be "Ready" ...
	I1026 02:29:50.989070   79140 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:50.989125   79140 main.go:141] libmachine: (bridge-761631) Calling .Close
	I1026 02:29:50.989395   79140 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:50.989428   79140 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:50.989438   79140 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:50.989442   79140 main.go:141] libmachine: (bridge-761631) DBG | Closing plugin on server side
	I1026 02:29:50.989448   79140 main.go:141] libmachine: (bridge-761631) Calling .Close
	I1026 02:29:50.989715   79140 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:50.989732   79140 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:51.004388   79140 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:51.004411   79140 main.go:141] libmachine: (bridge-761631) Calling .Close
	I1026 02:29:51.004728   79140 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:51.004823   79140 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:51.004799   79140 main.go:141] libmachine: (bridge-761631) DBG | Closing plugin on server side
	I1026 02:29:51.380622   79140 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-761631" context rescaled to 1 replicas
	I1026 02:29:51.493156   79140 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:51.493186   79140 main.go:141] libmachine: (bridge-761631) Calling .Close
	I1026 02:29:51.493458   79140 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:51.493472   79140 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:51.493481   79140 main.go:141] libmachine: Making call to close driver server
	I1026 02:29:51.493488   79140 main.go:141] libmachine: (bridge-761631) Calling .Close
	I1026 02:29:51.493752   79140 main.go:141] libmachine: Successfully made call to close driver server
	I1026 02:29:51.493769   79140 main.go:141] libmachine: Making call to close connection to plugin binary
	I1026 02:29:51.495504   79140 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1026 02:29:51.496896   79140 addons.go:510] duration metric: took 1.315972716s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1026 02:29:52.899525   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace has status "Ready":"False"
	I1026 02:29:54.899623   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace has status "Ready":"False"
	I1026 02:29:56.900017   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace has status "Ready":"False"
	I1026 02:29:59.398542   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:01.399647   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:01.899831   79140 pod_ready.go:98] pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:30:01 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:29:50 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:29:50 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:29:50 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:29:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.234 HostIPs:[{IP:192.168.50.234}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-26 02:29:50 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-26 02:29:51 +0000 UTC,FinishedAt:2024-10-26 02:30:01 +0000 UTC,ContainerID:cri-o://4af0ca4c814fadb8bc70871a1e5abe280966f290d195249545dbdba00a03d01d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://4af0ca4c814fadb8bc70871a1e5abe280966f290d195249545dbdba00a03d01d Started:0xc00203af90 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000882800} {Name:kube-api-access-f8bsr MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000882810}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1026 02:30:01.899871   79140 pod_ready.go:82] duration metric: took 11.00673045s for pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace to be "Ready" ...
	E1026 02:30:01.899886   79140 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-k9kvl" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:30:01 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:29:50 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:29:50 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:29:50 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-26 02:29:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.234 HostIPs:[{IP:192.168.50.234}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-26 02:29:50 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-26 02:29:51 +0000 UTC,FinishedAt:2024-10-26 02:30:01 +0000 UTC,ContainerID:cri-o://4af0ca4c814fadb8bc70871a1e5abe280966f290d195249545dbdba00a03d01d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://4af0ca4c814fadb8bc70871a1e5abe280966f290d195249545dbdba00a03d01d Started:0xc00203af90 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000882800} {Name:kube-api-access-f8bsr MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000882810}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1026 02:30:01.899902   79140 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:03.906256   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:05.929917   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:08.406456   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:10.907420   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:13.405880   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:15.406701   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:17.906220   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:19.906580   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:22.406051   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:24.406236   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:26.407004   79140 pod_ready.go:103] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"False"
	I1026 02:30:28.905840   79140 pod_ready.go:93] pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace has status "Ready":"True"
	I1026 02:30:28.905866   79140 pod_ready.go:82] duration metric: took 27.00595527s for pod "coredns-7c65d6cfc9-nggsr" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.905877   79140 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.909975   79140 pod_ready.go:93] pod "etcd-bridge-761631" in "kube-system" namespace has status "Ready":"True"
	I1026 02:30:28.909998   79140 pod_ready.go:82] duration metric: took 4.113104ms for pod "etcd-bridge-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.910007   79140 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.913593   79140 pod_ready.go:93] pod "kube-apiserver-bridge-761631" in "kube-system" namespace has status "Ready":"True"
	I1026 02:30:28.913613   79140 pod_ready.go:82] duration metric: took 3.599819ms for pod "kube-apiserver-bridge-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.913621   79140 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.918213   79140 pod_ready.go:93] pod "kube-controller-manager-bridge-761631" in "kube-system" namespace has status "Ready":"True"
	I1026 02:30:28.918232   79140 pod_ready.go:82] duration metric: took 4.60513ms for pod "kube-controller-manager-bridge-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.918240   79140 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-b657k" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.924642   79140 pod_ready.go:93] pod "kube-proxy-b657k" in "kube-system" namespace has status "Ready":"True"
	I1026 02:30:28.924662   79140 pod_ready.go:82] duration metric: took 6.416092ms for pod "kube-proxy-b657k" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:28.924670   79140 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:29.305235   79140 pod_ready.go:93] pod "kube-scheduler-bridge-761631" in "kube-system" namespace has status "Ready":"True"
	I1026 02:30:29.305259   79140 pod_ready.go:82] duration metric: took 380.583389ms for pod "kube-scheduler-bridge-761631" in "kube-system" namespace to be "Ready" ...
	I1026 02:30:29.305267   79140 pod_ready.go:39] duration metric: took 38.429587744s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 02:30:29.305282   79140 api_server.go:52] waiting for apiserver process to appear ...
	I1026 02:30:29.305347   79140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 02:30:29.321031   79140 api_server.go:72] duration metric: took 39.140108344s to wait for apiserver process to appear ...
	I1026 02:30:29.321059   79140 api_server.go:88] waiting for apiserver healthz status ...
	I1026 02:30:29.321078   79140 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I1026 02:30:29.325233   79140 api_server.go:279] https://192.168.50.234:8443/healthz returned 200:
	ok
	I1026 02:30:29.326297   79140 api_server.go:141] control plane version: v1.31.2
	I1026 02:30:29.326322   79140 api_server.go:131] duration metric: took 5.254713ms to wait for apiserver health ...
	I1026 02:30:29.326330   79140 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 02:30:29.506763   79140 system_pods.go:59] 7 kube-system pods found
	I1026 02:30:29.506790   79140 system_pods.go:61] "coredns-7c65d6cfc9-nggsr" [56b01394-480f-495b-922a-ed2b483f294e] Running
	I1026 02:30:29.506795   79140 system_pods.go:61] "etcd-bridge-761631" [67fe00a3-64c4-4206-91eb-821af3fef7da] Running
	I1026 02:30:29.506798   79140 system_pods.go:61] "kube-apiserver-bridge-761631" [b2d08738-29e9-410e-aa6a-373816a7d585] Running
	I1026 02:30:29.506802   79140 system_pods.go:61] "kube-controller-manager-bridge-761631" [8f000fcc-5dca-4b07-87fd-7dbf09ed82c4] Running
	I1026 02:30:29.506805   79140 system_pods.go:61] "kube-proxy-b657k" [9afd730f-3a54-454b-9188-f1f24192cf54] Running
	I1026 02:30:29.506808   79140 system_pods.go:61] "kube-scheduler-bridge-761631" [1cac5675-b5aa-4239-b6c6-1d3b5d9e69cf] Running
	I1026 02:30:29.506810   79140 system_pods.go:61] "storage-provisioner" [c600327b-8a81-46eb-9730-37f8e45fe0be] Running
	I1026 02:30:29.506816   79140 system_pods.go:74] duration metric: took 180.479854ms to wait for pod list to return data ...
	I1026 02:30:29.506821   79140 default_sa.go:34] waiting for default service account to be created ...
	I1026 02:30:29.704417   79140 default_sa.go:45] found service account: "default"
	I1026 02:30:29.704444   79140 default_sa.go:55] duration metric: took 197.616958ms for default service account to be created ...
	I1026 02:30:29.704453   79140 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 02:30:29.906080   79140 system_pods.go:86] 7 kube-system pods found
	I1026 02:30:29.906119   79140 system_pods.go:89] "coredns-7c65d6cfc9-nggsr" [56b01394-480f-495b-922a-ed2b483f294e] Running
	I1026 02:30:29.906128   79140 system_pods.go:89] "etcd-bridge-761631" [67fe00a3-64c4-4206-91eb-821af3fef7da] Running
	I1026 02:30:29.906134   79140 system_pods.go:89] "kube-apiserver-bridge-761631" [b2d08738-29e9-410e-aa6a-373816a7d585] Running
	I1026 02:30:29.906139   79140 system_pods.go:89] "kube-controller-manager-bridge-761631" [8f000fcc-5dca-4b07-87fd-7dbf09ed82c4] Running
	I1026 02:30:29.906145   79140 system_pods.go:89] "kube-proxy-b657k" [9afd730f-3a54-454b-9188-f1f24192cf54] Running
	I1026 02:30:29.906148   79140 system_pods.go:89] "kube-scheduler-bridge-761631" [1cac5675-b5aa-4239-b6c6-1d3b5d9e69cf] Running
	I1026 02:30:29.906152   79140 system_pods.go:89] "storage-provisioner" [c600327b-8a81-46eb-9730-37f8e45fe0be] Running
	I1026 02:30:29.906158   79140 system_pods.go:126] duration metric: took 201.700394ms to wait for k8s-apps to be running ...
	I1026 02:30:29.906164   79140 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 02:30:29.906210   79140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 02:30:29.920749   79140 system_svc.go:56] duration metric: took 14.573227ms WaitForService to wait for kubelet
	I1026 02:30:29.920779   79140 kubeadm.go:582] duration metric: took 39.739859653s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 02:30:29.920802   79140 node_conditions.go:102] verifying NodePressure condition ...
	I1026 02:30:30.104198   79140 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 02:30:30.104222   79140 node_conditions.go:123] node cpu capacity is 2
	I1026 02:30:30.104232   79140 node_conditions.go:105] duration metric: took 183.42671ms to run NodePressure ...
	I1026 02:30:30.104243   79140 start.go:241] waiting for startup goroutines ...
	I1026 02:30:30.104250   79140 start.go:246] waiting for cluster config update ...
	I1026 02:30:30.104260   79140 start.go:255] writing updated cluster config ...
	I1026 02:30:30.104497   79140 ssh_runner.go:195] Run: rm -f paused
	I1026 02:30:30.151937   79140 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1026 02:30:30.153939   79140 out.go:177] * Done! kubectl is now configured to use "bridge-761631" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.147557448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910484147482022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=626c70dc-9a80-4552-bf1b-05d1ed5d5bdc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.148031410Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f978db9a-6bc3-47d1-9be0-117f4743b4fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.148100255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f978db9a-6bc3-47d1-9be0-117f4743b4fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.148305634Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723,PodSandboxId:ff2b794780fc51d1df85c4c7d8481d3636eb5aeaacef6049417f58342aa9445a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729909272342312554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c86915-4d74-4774-b8cd-86bf37672a55,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ce0e41970bb56da898602f64b6eb9f11644a3f9d8cd20bf59ca7748de2be71,PodSandboxId:ca8867f88fa0b7395a3b666f1e65e5b00af426893aed65e0726a6339c7d4ff65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729909252273224103,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c9b0d313-34c5-4a3b-9172-ea1015817010,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416,PodSandboxId:1c67ad179fc6ac8ec880e769ad49b5604bc648df638b1eda2f5614dcf4d8883a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729909249140596129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xpxp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ea4ee4-aab2-4c92-ab2f-e1026c703ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a,PodSandboxId:a1028bd8f05ef54287c48df04b96fa14767b47848c03179218f331255297faa9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729909241501376375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c947q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e41c6a1e-1
a8e-4c49-93ff-e0c60a87ea69,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d,PodSandboxId:ff2b794780fc51d1df85c4c7d8481d3636eb5aeaacef6049417f58342aa9445a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729909241485916288,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c86915-4d74-4774-b8cd
-86bf37672a55,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55,PodSandboxId:29430ce1be5a44f71f48314591f66659f730e318fddc1961b4e87b465907e46c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729909237921167837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f84d74b5e63a81aeb0f93
07c8959d094,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e,PodSandboxId:500d0afc9dfd3892496e02ee9eb36a4751548566039582e8bf0c778d13578194,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729909237908389459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbae12a8278ff238e662a15
d0686d074,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8,PodSandboxId:29ed2f42a7fd5b86ff1e9622fdede7a14efd10faa8e34903edd8ea0dc48f8e19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729909237895427824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: ef9976e774bcaa0181689afdda68dcb0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72,PodSandboxId:5532133f711cf97c4fb57586ed1f2a1187bb2092a3f702f06765813a88d4768e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729909237925398510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e1bb8364b888bb16a22a8938242f
16,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f978db9a-6bc3-47d1-9be0-117f4743b4fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.187095518Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6efca2f6-b075-452c-841a-df302d0adc06 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.187217030Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6efca2f6-b075-452c-841a-df302d0adc06 name=/runtime.v1.RuntimeService/Version
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.188066943Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b94aebb-f759-48d9-98ef-efff32c9f0ef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.188788768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910484188645765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b94aebb-f759-48d9-98ef-efff32c9f0ef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.189250311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75a11227-e098-4b31-88c8-b5c7218fc433 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.189318141Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75a11227-e098-4b31-88c8-b5c7218fc433 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.189585400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723,PodSandboxId:ff2b794780fc51d1df85c4c7d8481d3636eb5aeaacef6049417f58342aa9445a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729909272342312554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c86915-4d74-4774-b8cd-86bf37672a55,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ce0e41970bb56da898602f64b6eb9f11644a3f9d8cd20bf59ca7748de2be71,PodSandboxId:ca8867f88fa0b7395a3b666f1e65e5b00af426893aed65e0726a6339c7d4ff65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729909252273224103,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c9b0d313-34c5-4a3b-9172-ea1015817010,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416,PodSandboxId:1c67ad179fc6ac8ec880e769ad49b5604bc648df638b1eda2f5614dcf4d8883a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729909249140596129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xpxp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ea4ee4-aab2-4c92-ab2f-e1026c703ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a,PodSandboxId:a1028bd8f05ef54287c48df04b96fa14767b47848c03179218f331255297faa9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729909241501376375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c947q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e41c6a1e-1
a8e-4c49-93ff-e0c60a87ea69,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d,PodSandboxId:ff2b794780fc51d1df85c4c7d8481d3636eb5aeaacef6049417f58342aa9445a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729909241485916288,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c86915-4d74-4774-b8cd
-86bf37672a55,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55,PodSandboxId:29430ce1be5a44f71f48314591f66659f730e318fddc1961b4e87b465907e46c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729909237921167837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f84d74b5e63a81aeb0f93
07c8959d094,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e,PodSandboxId:500d0afc9dfd3892496e02ee9eb36a4751548566039582e8bf0c778d13578194,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729909237908389459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbae12a8278ff238e662a15
d0686d074,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8,PodSandboxId:29ed2f42a7fd5b86ff1e9622fdede7a14efd10faa8e34903edd8ea0dc48f8e19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729909237895427824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: ef9976e774bcaa0181689afdda68dcb0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72,PodSandboxId:5532133f711cf97c4fb57586ed1f2a1187bb2092a3f702f06765813a88d4768e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729909237925398510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e1bb8364b888bb16a22a8938242f
16,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75a11227-e098-4b31-88c8-b5c7218fc433 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.222635525Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fddfaabe-7dba-4c48-8605-f631a8d0e50f name=/runtime.v1.RuntimeService/Version
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.222747247Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fddfaabe-7dba-4c48-8605-f631a8d0e50f name=/runtime.v1.RuntimeService/Version
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.223925297Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87d9aa2f-e851-4b2b-a66c-696cd191dd25 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.224314551Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910484224295235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87d9aa2f-e851-4b2b-a66c-696cd191dd25 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.224797707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20a4cf64-de46-4106-ac7f-5b461e223c0d name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.224854699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20a4cf64-de46-4106-ac7f-5b461e223c0d name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.225064052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723,PodSandboxId:ff2b794780fc51d1df85c4c7d8481d3636eb5aeaacef6049417f58342aa9445a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729909272342312554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c86915-4d74-4774-b8cd-86bf37672a55,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ce0e41970bb56da898602f64b6eb9f11644a3f9d8cd20bf59ca7748de2be71,PodSandboxId:ca8867f88fa0b7395a3b666f1e65e5b00af426893aed65e0726a6339c7d4ff65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729909252273224103,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c9b0d313-34c5-4a3b-9172-ea1015817010,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416,PodSandboxId:1c67ad179fc6ac8ec880e769ad49b5604bc648df638b1eda2f5614dcf4d8883a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729909249140596129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xpxp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ea4ee4-aab2-4c92-ab2f-e1026c703ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a,PodSandboxId:a1028bd8f05ef54287c48df04b96fa14767b47848c03179218f331255297faa9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729909241501376375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c947q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e41c6a1e-1
a8e-4c49-93ff-e0c60a87ea69,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d,PodSandboxId:ff2b794780fc51d1df85c4c7d8481d3636eb5aeaacef6049417f58342aa9445a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729909241485916288,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c86915-4d74-4774-b8cd
-86bf37672a55,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55,PodSandboxId:29430ce1be5a44f71f48314591f66659f730e318fddc1961b4e87b465907e46c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729909237921167837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f84d74b5e63a81aeb0f93
07c8959d094,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e,PodSandboxId:500d0afc9dfd3892496e02ee9eb36a4751548566039582e8bf0c778d13578194,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729909237908389459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbae12a8278ff238e662a15
d0686d074,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8,PodSandboxId:29ed2f42a7fd5b86ff1e9622fdede7a14efd10faa8e34903edd8ea0dc48f8e19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729909237895427824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: ef9976e774bcaa0181689afdda68dcb0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72,PodSandboxId:5532133f711cf97c4fb57586ed1f2a1187bb2092a3f702f06765813a88d4768e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729909237925398510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e1bb8364b888bb16a22a8938242f
16,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=20a4cf64-de46-4106-ac7f-5b461e223c0d name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.254152612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e120870-097f-4d64-aa4d-65647ee98f3a name=/runtime.v1.RuntimeService/Version
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.254223198Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e120870-097f-4d64-aa4d-65647ee98f3a name=/runtime.v1.RuntimeService/Version
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.255448141Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4960e2d8-2adc-4e1d-86ac-5c4a97e08aef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.255880546Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910484255859861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4960e2d8-2adc-4e1d-86ac-5c4a97e08aef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.256296226Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c86949d-d1f9-4a99-9429-a5afc8df1c57 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.256362781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c86949d-d1f9-4a99-9429-a5afc8df1c57 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 02:41:24 default-k8s-diff-port-661357 crio[708]: time="2024-10-26 02:41:24.256616893Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723,PodSandboxId:ff2b794780fc51d1df85c4c7d8481d3636eb5aeaacef6049417f58342aa9445a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729909272342312554,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c86915-4d74-4774-b8cd-86bf37672a55,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ce0e41970bb56da898602f64b6eb9f11644a3f9d8cd20bf59ca7748de2be71,PodSandboxId:ca8867f88fa0b7395a3b666f1e65e5b00af426893aed65e0726a6339c7d4ff65,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729909252273224103,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c9b0d313-34c5-4a3b-9172-ea1015817010,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416,PodSandboxId:1c67ad179fc6ac8ec880e769ad49b5604bc648df638b1eda2f5614dcf4d8883a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729909249140596129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xpxp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3ea4ee4-aab2-4c92-ab2f-e1026c703ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a,PodSandboxId:a1028bd8f05ef54287c48df04b96fa14767b47848c03179218f331255297faa9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1729909241501376375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c947q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e41c6a1e-1
a8e-4c49-93ff-e0c60a87ea69,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d,PodSandboxId:ff2b794780fc51d1df85c4c7d8481d3636eb5aeaacef6049417f58342aa9445a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729909241485916288,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90c86915-4d74-4774-b8cd
-86bf37672a55,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55,PodSandboxId:29430ce1be5a44f71f48314591f66659f730e318fddc1961b4e87b465907e46c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1729909237921167837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f84d74b5e63a81aeb0f93
07c8959d094,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e,PodSandboxId:500d0afc9dfd3892496e02ee9eb36a4751548566039582e8bf0c778d13578194,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1729909237908389459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbae12a8278ff238e662a15
d0686d074,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8,PodSandboxId:29ed2f42a7fd5b86ff1e9622fdede7a14efd10faa8e34903edd8ea0dc48f8e19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1729909237895427824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: ef9976e774bcaa0181689afdda68dcb0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72,PodSandboxId:5532133f711cf97c4fb57586ed1f2a1187bb2092a3f702f06765813a88d4768e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729909237925398510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-661357,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e1bb8364b888bb16a22a8938242f
16,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c86949d-d1f9-4a99-9429-a5afc8df1c57 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5f5715a92670a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   ff2b794780fc5       storage-provisioner
	f7ce0e41970bb       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   ca8867f88fa0b       busybox
	e298a85093930       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      20 minutes ago      Running             coredns                   1                   1c67ad179fc6a       coredns-7c65d6cfc9-xpxp4
	da7e523b4bbb0       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      20 minutes ago      Running             kube-proxy                1                   a1028bd8f05ef       kube-proxy-c947q
	17b28d6cdb6a1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   ff2b794780fc5       storage-provisioner
	b57cb0310518d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   5532133f711cf       etcd-default-k8s-diff-port-661357
	c185a46f0bdfd       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      20 minutes ago      Running             kube-scheduler            1                   29430ce1be5a4       kube-scheduler-default-k8s-diff-port-661357
	c7c70f177d310       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      20 minutes ago      Running             kube-apiserver            1                   500d0afc9dfd3       kube-apiserver-default-k8s-diff-port-661357
	a4307158d97a1       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      20 minutes ago      Running             kube-controller-manager   1                   29ed2f42a7fd5       kube-controller-manager-default-k8s-diff-port-661357
	
	
	==> coredns [e298a85093930e8cbc0cd4497c9c0efa98f51b71fec3e397093f25e62da75416] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51918 - 41826 "HINFO IN 4582937509147534390.1757325468208855726. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025059784s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-661357
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-661357
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=default-k8s-diff-port-661357
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_26T02_12_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 02:11:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-661357
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 02:41:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 02:36:28 +0000   Sat, 26 Oct 2024 02:11:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 02:36:28 +0000   Sat, 26 Oct 2024 02:11:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 02:36:28 +0000   Sat, 26 Oct 2024 02:11:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 02:36:28 +0000   Sat, 26 Oct 2024 02:20:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.18
	  Hostname:    default-k8s-diff-port-661357
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c3995c3d63394bf89d65eca9d2425260
	  System UUID:                c3995c3d-6339-4bf8-9d65-eca9d2425260
	  Boot ID:                    6939014d-c7b4-47cf-adfa-355e3ba8660d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-xpxp4                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-661357                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-661357             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-661357    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-c947q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-661357             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-jkl5g                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-661357 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-661357 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-661357 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-661357 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-661357 event: Registered Node default-k8s-diff-port-661357 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-661357 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-661357 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-661357 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-661357 event: Registered Node default-k8s-diff-port-661357 in Controller
	
	
	==> dmesg <==
	[Oct26 02:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051355] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037341] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.849005] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.876726] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.568076] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.624957] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.062482] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062809] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.202814] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.117981] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.272701] systemd-fstab-generator[699]: Ignoring "noauto" option for root device
	[  +4.082704] systemd-fstab-generator[789]: Ignoring "noauto" option for root device
	[  +1.812008] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +0.059730] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.498520] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.484663] systemd-fstab-generator[1539]: Ignoring "noauto" option for root device
	[  +3.239561] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.144037] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [b57cb0310518d4260d9501aa5e19dbe689af3fe2e45e50fc4dd20bf23a0e6e72] <==
	{"level":"info","ts":"2024-10-26T02:27:52.912564Z","caller":"traceutil/trace.go:171","msg":"trace[1350654715] transaction","detail":"{read_only:false; response_revision:962; number_of_response:1; }","duration":"209.334507ms","start":"2024-10-26T02:27:52.703209Z","end":"2024-10-26T02:27:52.912544Z","steps":["trace[1350654715] 'process raft request'  (duration: 209.115568ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:28:30.661702Z","caller":"traceutil/trace.go:171","msg":"trace[713476800] transaction","detail":"{read_only:false; response_revision:990; number_of_response:1; }","duration":"105.697391ms","start":"2024-10-26T02:28:30.555982Z","end":"2024-10-26T02:28:30.661679Z","steps":["trace[713476800] 'process raft request'  (duration: 105.487292ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:29:11.809929Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"381.52955ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:29:11.810257Z","caller":"traceutil/trace.go:171","msg":"trace[2092483729] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1022; }","duration":"381.923158ms","start":"2024-10-26T02:29:11.428317Z","end":"2024-10-26T02:29:11.810240Z","steps":["trace[2092483729] 'range keys from in-memory index tree'  (duration: 381.509619ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:29:11.810281Z","caller":"traceutil/trace.go:171","msg":"trace[692266272] transaction","detail":"{read_only:false; response_revision:1023; number_of_response:1; }","duration":"383.666502ms","start":"2024-10-26T02:29:11.426603Z","end":"2024-10-26T02:29:11.810269Z","steps":["trace[692266272] 'process raft request'  (duration: 360.516118ms)","trace[692266272] 'compare'  (duration: 22.510822ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T02:29:11.810600Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-26T02:29:11.426583Z","time spent":"383.83336ms","remote":"127.0.0.1:49422","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-qzh5p77s5bgvam2krmy2un4zhe\" mod_revision:1014 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-qzh5p77s5bgvam2krmy2un4zhe\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-qzh5p77s5bgvam2krmy2un4zhe\" > >"}
	{"level":"warn","ts":"2024-10-26T02:29:11.810136Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.472459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:29:11.811081Z","caller":"traceutil/trace.go:171","msg":"trace[1079584733] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1023; }","duration":"190.442669ms","start":"2024-10-26T02:29:11.620627Z","end":"2024-10-26T02:29:11.811069Z","steps":["trace[1079584733] 'agreement among raft nodes before linearized reading'  (duration: 189.40885ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:29:11.809972Z","caller":"traceutil/trace.go:171","msg":"trace[371637022] linearizableReadLoop","detail":"{readStateIndex:1165; appliedIndex:1164; }","duration":"189.314741ms","start":"2024-10-26T02:29:11.620631Z","end":"2024-10-26T02:29:11.809945Z","steps":["trace[371637022] 'read index received'  (duration: 166.433072ms)","trace[371637022] 'applied index is now lower than readState.Index'  (duration: 22.881095ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T02:29:35.740830Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.74092ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-26T02:29:35.741093Z","caller":"traceutil/trace.go:171","msg":"trace[1747380013] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1043; }","duration":"121.031075ms","start":"2024-10-26T02:29:35.620044Z","end":"2024-10-26T02:29:35.741075Z","steps":["trace[1747380013] 'range keys from in-memory index tree'  (duration: 120.677879ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-26T02:29:37.619443Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.95841ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16281799934639471294 > lease_revoke:<id:61f492c6a029fa64>","response":"size:27"}
	{"level":"info","ts":"2024-10-26T02:29:37.619616Z","caller":"traceutil/trace.go:171","msg":"trace[1385581497] linearizableReadLoop","detail":"{readStateIndex:1191; appliedIndex:1190; }","duration":"138.346081ms","start":"2024-10-26T02:29:37.481260Z","end":"2024-10-26T02:29:37.619606Z","steps":["trace[1385581497] 'read index received'  (duration: 28.167966ms)","trace[1385581497] 'applied index is now lower than readState.Index'  (duration: 110.177046ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-26T02:29:37.619874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.613679ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-10-26T02:29:37.619952Z","caller":"traceutil/trace.go:171","msg":"trace[1437726017] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1043; }","duration":"138.700645ms","start":"2024-10-26T02:29:37.481239Z","end":"2024-10-26T02:29:37.619939Z","steps":["trace[1437726017] 'agreement among raft nodes before linearized reading'  (duration: 138.519987ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:30:34.001866Z","caller":"traceutil/trace.go:171","msg":"trace[1268697056] transaction","detail":"{read_only:false; response_revision:1089; number_of_response:1; }","duration":"118.261598ms","start":"2024-10-26T02:30:33.883578Z","end":"2024-10-26T02:30:34.001839Z","steps":["trace[1268697056] 'process raft request'  (duration: 118.173107ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-26T02:30:39.602955Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":852}
	{"level":"info","ts":"2024-10-26T02:30:39.612539Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":852,"took":"9.233046ms","hash":529089037,"current-db-size-bytes":2633728,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2633728,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-10-26T02:30:39.612631Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":529089037,"revision":852,"compact-revision":-1}
	{"level":"info","ts":"2024-10-26T02:35:39.613774Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1093}
	{"level":"info","ts":"2024-10-26T02:35:39.617727Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1093,"took":"3.335551ms","hash":1072039834,"current-db-size-bytes":2633728,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1581056,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-26T02:35:39.617809Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1072039834,"revision":1093,"compact-revision":852}
	{"level":"info","ts":"2024-10-26T02:40:39.620302Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1336}
	{"level":"info","ts":"2024-10-26T02:40:39.623612Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1336,"took":"2.997048ms","hash":3556021749,"current-db-size-bytes":2633728,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1568768,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-26T02:40:39.623658Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3556021749,"revision":1336,"compact-revision":1093}
	
	
	==> kernel <==
	 02:41:24 up 21 min,  0 users,  load average: 0.14, 0.09, 0.08
	Linux default-k8s-diff-port-661357 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c7c70f177d310d1c031f73c6b2ff7c06f8efc895b1d7b85879a0e511b0a0bc3e] <==
	I1026 02:36:41.839977       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:36:41.841160       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 02:38:41.840137       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:38:41.840251       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 02:38:41.841383       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 02:38:41.841460       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:38:41.841570       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 02:38:41.842762       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 02:40:40.840321       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:40:40.840472       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1026 02:40:41.842371       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:40:41.842438       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1026 02:40:41.842377       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:40:41.842677       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 02:40:41.843681       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:40:41.843739       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a4307158d97a1f023c4bf00810ab965861f800dc3a9dc7298903ae2fa6587de8] <==
	E1026 02:36:14.547615       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:36:15.051256       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1026 02:36:28.363983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-661357"
	E1026 02:36:44.553682       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:36:45.059066       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1026 02:37:05.153648       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="256.428µs"
	E1026 02:37:14.560168       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:37:15.066165       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1026 02:37:17.151738       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="131.781µs"
	E1026 02:37:44.565816       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:37:45.074094       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:38:14.571657       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:38:15.081237       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:38:44.577957       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:38:45.088447       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:39:14.584204       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:39:15.096255       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:39:44.590379       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:39:45.106631       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:40:14.595675       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:40:15.113953       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:40:44.601764       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:40:45.121134       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1026 02:41:14.608004       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 02:41:15.127714       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [da7e523b4bbb0e98b662c4a307f21a8c25f87bcad8f8297ca973ed9707dcdb5a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1026 02:20:41.671267       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1026 02:20:41.682120       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.18"]
	E1026 02:20:41.682253       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 02:20:41.728180       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1026 02:20:41.728711       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 02:20:41.728797       1 server_linux.go:169] "Using iptables Proxier"
	I1026 02:20:41.734738       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 02:20:41.735974       1 server.go:483] "Version info" version="v1.31.2"
	I1026 02:20:41.736050       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 02:20:41.742718       1 config.go:199] "Starting service config controller"
	I1026 02:20:41.742771       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1026 02:20:41.742792       1 config.go:105] "Starting endpoint slice config controller"
	I1026 02:20:41.742796       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1026 02:20:41.743220       1 config.go:328] "Starting node config controller"
	I1026 02:20:41.743280       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1026 02:20:41.844589       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1026 02:20:41.844712       1 shared_informer.go:320] Caches are synced for service config
	I1026 02:20:41.845700       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c185a46f0bdfda3fcd8b0284eef3f5549064dd8a04868a38cca002d50405ec55] <==
	I1026 02:20:39.230264       1 serving.go:386] Generated self-signed cert in-memory
	W1026 02:20:40.763086       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 02:20:40.763123       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 02:20:40.763139       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 02:20:40.763149       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 02:20:40.827012       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1026 02:20:40.827115       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 02:20:40.830012       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1026 02:20:40.830674       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 02:20:40.830756       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 02:20:40.830776       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 02:20:40.931071       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 26 02:40:06 default-k8s-diff-port-661357 kubelet[916]: E1026 02:40:06.401129     916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910406400831534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:40:15 default-k8s-diff-port-661357 kubelet[916]: E1026 02:40:15.136606     916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jkl5g" podUID="023bd779-83b7-42ef-893d-f7ab70f08ae7"
	Oct 26 02:40:16 default-k8s-diff-port-661357 kubelet[916]: E1026 02:40:16.402995     916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910416402662901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:40:16 default-k8s-diff-port-661357 kubelet[916]: E1026 02:40:16.403069     916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910416402662901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:40:26 default-k8s-diff-port-661357 kubelet[916]: E1026 02:40:26.404071     916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910426403840060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:40:26 default-k8s-diff-port-661357 kubelet[916]: E1026 02:40:26.404119     916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910426403840060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:40:30 default-k8s-diff-port-661357 kubelet[916]: E1026 02:40:30.136582     916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jkl5g" podUID="023bd779-83b7-42ef-893d-f7ab70f08ae7"
	Oct 26 02:40:36 default-k8s-diff-port-661357 kubelet[916]: E1026 02:40:36.150224     916 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 26 02:40:36 default-k8s-diff-port-661357 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 26 02:40:36 default-k8s-diff-port-661357 kubelet[916]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 26 02:40:36 default-k8s-diff-port-661357 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 02:40:36 default-k8s-diff-port-661357 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 02:40:36 default-k8s-diff-port-661357 kubelet[916]: E1026 02:40:36.406012     916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910436405736264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:40:36 default-k8s-diff-port-661357 kubelet[916]: E1026 02:40:36.406048     916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910436405736264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:40:44 default-k8s-diff-port-661357 kubelet[916]: E1026 02:40:44.137752     916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jkl5g" podUID="023bd779-83b7-42ef-893d-f7ab70f08ae7"
	Oct 26 02:40:46 default-k8s-diff-port-661357 kubelet[916]: E1026 02:40:46.407548     916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910446407281019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:40:46 default-k8s-diff-port-661357 kubelet[916]: E1026 02:40:46.407585     916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910446407281019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:40:56 default-k8s-diff-port-661357 kubelet[916]: E1026 02:40:56.408794     916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910456408425169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:40:56 default-k8s-diff-port-661357 kubelet[916]: E1026 02:40:56.408864     916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910456408425169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:40:59 default-k8s-diff-port-661357 kubelet[916]: E1026 02:40:59.136259     916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jkl5g" podUID="023bd779-83b7-42ef-893d-f7ab70f08ae7"
	Oct 26 02:41:06 default-k8s-diff-port-661357 kubelet[916]: E1026 02:41:06.410317     916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910466410089249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:41:06 default-k8s-diff-port-661357 kubelet[916]: E1026 02:41:06.410359     916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910466410089249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:41:11 default-k8s-diff-port-661357 kubelet[916]: E1026 02:41:11.135871     916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jkl5g" podUID="023bd779-83b7-42ef-893d-f7ab70f08ae7"
	Oct 26 02:41:16 default-k8s-diff-port-661357 kubelet[916]: E1026 02:41:16.411652     916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910476411251069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 02:41:16 default-k8s-diff-port-661357 kubelet[916]: E1026 02:41:16.411947     916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729910476411251069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [17b28d6cdb6a10990233f0e005626378df7b7361f305d390579359856888231d] <==
	I1026 02:20:41.582048       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 02:21:11.585304       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5f5715a92670ac6389fcc35e5e3ac3c4cf5400af7388877e53fe5488b1667723] <==
	I1026 02:21:12.431353       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 02:21:12.442729       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 02:21:12.442795       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 02:21:29.845768       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 02:21:29.846184       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-661357_d8be50d4-5354-4142-959b-3fee8c75f754!
	I1026 02:21:29.849588       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"09cda3dd-67fa-4ae7-ae56-1289dd15961d", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-661357_d8be50d4-5354-4142-959b-3fee8c75f754 became leader
	I1026 02:21:29.947337       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-661357_d8be50d4-5354-4142-959b-3fee8c75f754!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-661357 -n default-k8s-diff-port-661357
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-661357 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-jkl5g
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-661357 describe pod metrics-server-6867b74b74-jkl5g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-661357 describe pod metrics-server-6867b74b74-jkl5g: exit status 1 (60.305209ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-jkl5g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-661357 describe pod metrics-server-6867b74b74-jkl5g: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (436.59s)

                                                
                                    

Test pass (250/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 32.1
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.2/json-events 16.7
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.06
18 TestDownloadOnly/v1.31.2/DeleteAll 0.13
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 79.01
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 161.12
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 11.48
35 TestAddons/parallel/Registry 17.94
37 TestAddons/parallel/InspektorGadget 11.99
40 TestAddons/parallel/CSI 56.61
41 TestAddons/parallel/Headlamp 21.64
42 TestAddons/parallel/CloudSpanner 5.53
43 TestAddons/parallel/LocalPath 54.88
44 TestAddons/parallel/NvidiaDevicePlugin 6.59
45 TestAddons/parallel/Yakd 11.7
48 TestCertOptions 50.17
49 TestCertExpiration 238.4
51 TestForceSystemdFlag 55.68
52 TestForceSystemdEnv 89.03
54 TestKVMDriverInstallOrUpdate 4.43
58 TestErrorSpam/setup 40.54
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.71
61 TestErrorSpam/pause 1.54
62 TestErrorSpam/unpause 1.62
63 TestErrorSpam/stop 5.29
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 79.75
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 33.48
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.91
75 TestFunctional/serial/CacheCmd/cache/add_local 2.08
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
80 TestFunctional/serial/CacheCmd/cache/delete 0.09
81 TestFunctional/serial/MinikubeKubectlCmd 0.1
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 34.83
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.33
86 TestFunctional/serial/LogsFileCmd 1.35
87 TestFunctional/serial/InvalidService 3.82
89 TestFunctional/parallel/ConfigCmd 0.36
90 TestFunctional/parallel/DashboardCmd 26.97
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.13
93 TestFunctional/parallel/StatusCmd 1.08
97 TestFunctional/parallel/ServiceCmdConnect 10.46
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 46.64
101 TestFunctional/parallel/SSHCmd 0.45
102 TestFunctional/parallel/CpCmd 1.35
103 TestFunctional/parallel/MySQL 24.37
104 TestFunctional/parallel/FileSync 0.25
105 TestFunctional/parallel/CertSync 1.54
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
113 TestFunctional/parallel/License 0.59
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.19
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
125 TestFunctional/parallel/ProfileCmd/profile_list 0.34
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
127 TestFunctional/parallel/Version/short 0.05
128 TestFunctional/parallel/Version/components 0.46
129 TestFunctional/parallel/MountCmd/any-port 8.63
130 TestFunctional/parallel/ServiceCmd/List 0.24
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.24
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.27
133 TestFunctional/parallel/ServiceCmd/Format 0.29
134 TestFunctional/parallel/ServiceCmd/URL 0.4
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
139 TestFunctional/parallel/ImageCommands/ImageBuild 5.68
140 TestFunctional/parallel/ImageCommands/Setup 1.75
141 TestFunctional/parallel/MountCmd/specific-port 2.09
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.31
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.49
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.47
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.1
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.84
150 TestFunctional/parallel/ImageCommands/ImageRemove 1.5
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 7.23
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.62
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 194.1
160 TestMultiControlPlane/serial/DeployApp 6.77
161 TestMultiControlPlane/serial/PingHostFromPods 1.19
162 TestMultiControlPlane/serial/AddWorkerNode 56.84
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
165 TestMultiControlPlane/serial/CopyFile 12.96
171 TestMultiControlPlane/serial/DeleteSecondaryNode 16.67
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
174 TestMultiControlPlane/serial/RestartCluster 327.76
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
176 TestMultiControlPlane/serial/AddSecondaryNode 76.09
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
181 TestJSONOutput/start/Command 84.17
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.68
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.6
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 6.54
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 88.47
213 TestMountStart/serial/StartWithMountFirst 27.09
214 TestMountStart/serial/VerifyMountFirst 0.37
215 TestMountStart/serial/StartWithMountSecond 28.57
216 TestMountStart/serial/VerifyMountSecond 0.38
217 TestMountStart/serial/DeleteFirst 0.89
218 TestMountStart/serial/VerifyMountPostDelete 0.38
219 TestMountStart/serial/Stop 1.28
220 TestMountStart/serial/RestartStopped 23.55
221 TestMountStart/serial/VerifyMountPostStop 0.36
224 TestMultiNode/serial/FreshStart2Nodes 107.28
225 TestMultiNode/serial/DeployApp2Nodes 5.23
226 TestMultiNode/serial/PingHostFrom2Pods 0.77
227 TestMultiNode/serial/AddNode 47.5
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.56
230 TestMultiNode/serial/CopyFile 7.14
231 TestMultiNode/serial/StopNode 2.25
232 TestMultiNode/serial/StartAfterStop 39.48
234 TestMultiNode/serial/DeleteNode 1.99
236 TestMultiNode/serial/RestartMultiNode 198.38
237 TestMultiNode/serial/ValidateNameConflict 43.76
244 TestScheduledStopUnix 109.2
248 TestRunningBinaryUpgrade 174.54
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
254 TestNoKubernetes/serial/StartWithK8s 112.98
255 TestNoKubernetes/serial/StartWithStopK8s 18.86
256 TestNoKubernetes/serial/Start 40.05
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
258 TestNoKubernetes/serial/ProfileList 1.75
259 TestNoKubernetes/serial/Stop 1.3
260 TestNoKubernetes/serial/StartNoArgs 46.18
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
262 TestStoppedBinaryUpgrade/Setup 2.26
263 TestStoppedBinaryUpgrade/Upgrade 109.63
272 TestPause/serial/Start 85.73
280 TestNetworkPlugins/group/false 4.12
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.89
288 TestStartStop/group/no-preload/serial/FirstStart 105.97
289 TestPause/serial/SecondStartNoReconfiguration 53.92
290 TestPause/serial/Pause 0.68
291 TestPause/serial/VerifyStatus 0.23
292 TestPause/serial/Unpause 0.6
293 TestPause/serial/PauseAgain 0.71
294 TestPause/serial/DeletePaused 0.78
295 TestPause/serial/VerifyDeletedResources 0.6
297 TestStartStop/group/embed-certs/serial/FirstStart 55.09
298 TestStartStop/group/no-preload/serial/DeployApp 11.33
299 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
301 TestStartStop/group/embed-certs/serial/DeployApp 11.29
302 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
307 TestStartStop/group/no-preload/serial/SecondStart 620.76
309 TestStartStop/group/embed-certs/serial/SecondStart 556.06
310 TestStartStop/group/old-k8s-version/serial/Stop 5.31
311 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
316 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.3
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.25
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 579.38
327 TestStartStop/group/newest-cni/serial/FirstStart 48.72
329 TestNetworkPlugins/group/auto/Start 93.98
330 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.06
332 TestStartStop/group/newest-cni/serial/Stop 7.35
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
334 TestStartStop/group/newest-cni/serial/SecondStart 38.02
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
338 TestStartStop/group/newest-cni/serial/Pause 2.3
339 TestNetworkPlugins/group/kindnet/Start 61.84
340 TestNetworkPlugins/group/calico/Start 98.12
341 TestNetworkPlugins/group/auto/KubeletFlags 0.27
342 TestNetworkPlugins/group/auto/NetCatPod 14.3
343 TestNetworkPlugins/group/auto/DNS 0.16
344 TestNetworkPlugins/group/auto/Localhost 0.13
345 TestNetworkPlugins/group/auto/HairPin 0.11
346 TestNetworkPlugins/group/custom-flannel/Start 70.68
347 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
348 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
349 TestNetworkPlugins/group/kindnet/NetCatPod 10.35
350 TestNetworkPlugins/group/kindnet/DNS 0.15
351 TestNetworkPlugins/group/kindnet/Localhost 0.13
352 TestNetworkPlugins/group/kindnet/HairPin 0.13
353 TestNetworkPlugins/group/enable-default-cni/Start 55.75
354 TestNetworkPlugins/group/calico/ControllerPod 6.01
355 TestNetworkPlugins/group/calico/KubeletFlags 0.27
356 TestNetworkPlugins/group/calico/NetCatPod 11.3
357 TestNetworkPlugins/group/calico/DNS 0.16
358 TestNetworkPlugins/group/calico/Localhost 0.12
359 TestNetworkPlugins/group/calico/HairPin 0.13
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
361 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
362 TestNetworkPlugins/group/flannel/Start 70.09
363 TestNetworkPlugins/group/custom-flannel/DNS 0.17
364 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
365 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
366 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
367 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.23
368 TestNetworkPlugins/group/bridge/Start 93.36
369 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
370 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
371 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
372 TestNetworkPlugins/group/flannel/ControllerPod 6.01
373 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
374 TestNetworkPlugins/group/flannel/NetCatPod 10.21
375 TestNetworkPlugins/group/flannel/DNS 0.15
376 TestNetworkPlugins/group/flannel/Localhost 0.12
377 TestNetworkPlugins/group/flannel/HairPin 0.12
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
379 TestNetworkPlugins/group/bridge/NetCatPod 9.23
380 TestNetworkPlugins/group/bridge/DNS 0.14
381 TestNetworkPlugins/group/bridge/Localhost 0.12
382 TestNetworkPlugins/group/bridge/HairPin 0.11
x
+
TestDownloadOnly/v1.20.0/json-events (32.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-699862 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-699862 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (32.095687934s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (32.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1026 00:43:37.365770   17615 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1026 00:43:37.365894   17615 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-699862
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-699862: exit status 85 (57.793379ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-699862 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC |          |
	|         | -p download-only-699862        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 00:43:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 00:43:05.310243   17627 out.go:345] Setting OutFile to fd 1 ...
	I1026 00:43:05.310599   17627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:43:05.310610   17627 out.go:358] Setting ErrFile to fd 2...
	I1026 00:43:05.310615   17627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:43:05.310799   17627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	W1026 00:43:05.310926   17627 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19868-8680/.minikube/config/config.json: open /home/jenkins/minikube-integration/19868-8680/.minikube/config/config.json: no such file or directory
	I1026 00:43:05.311480   17627 out.go:352] Setting JSON to true
	I1026 00:43:05.312398   17627 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1525,"bootTime":1729901860,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 00:43:05.312491   17627 start.go:139] virtualization: kvm guest
	I1026 00:43:05.315079   17627 out.go:97] [download-only-699862] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1026 00:43:05.315211   17627 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball: no such file or directory
	I1026 00:43:05.315261   17627 notify.go:220] Checking for updates...
	I1026 00:43:05.316675   17627 out.go:169] MINIKUBE_LOCATION=19868
	I1026 00:43:05.318161   17627 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:43:05.319440   17627 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 00:43:05.320658   17627 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:43:05.321920   17627 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1026 00:43:05.324078   17627 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 00:43:05.324263   17627 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 00:43:05.419161   17627 out.go:97] Using the kvm2 driver based on user configuration
	I1026 00:43:05.419185   17627 start.go:297] selected driver: kvm2
	I1026 00:43:05.419191   17627 start.go:901] validating driver "kvm2" against <nil>
	I1026 00:43:05.419519   17627 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:43:05.419640   17627 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 00:43:05.434359   17627 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 00:43:05.434427   17627 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 00:43:05.434937   17627 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1026 00:43:05.435076   17627 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 00:43:05.435103   17627 cni.go:84] Creating CNI manager for ""
	I1026 00:43:05.435153   17627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 00:43:05.435161   17627 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 00:43:05.435220   17627 start.go:340] cluster config:
	{Name:download-only-699862 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-699862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 00:43:05.435394   17627 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:43:05.437342   17627 out.go:97] Downloading VM boot image ...
	I1026 00:43:05.437384   17627 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19868-8680/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1026 00:43:15.752300   17627 out.go:97] Starting "download-only-699862" primary control-plane node in "download-only-699862" cluster
	I1026 00:43:15.752320   17627 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1026 00:43:15.847220   17627 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1026 00:43:15.847253   17627 cache.go:56] Caching tarball of preloaded images
	I1026 00:43:15.847402   17627 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1026 00:43:15.849247   17627 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1026 00:43:15.849271   17627 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1026 00:43:15.946154   17627 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-699862 host does not exist
	  To start a cluster, run: "minikube start -p download-only-699862"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-699862
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (16.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-798188 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-798188 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (16.700460705s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (16.70s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1026 00:43:54.376578   17615 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1026 00:43:54.376619   17615 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-798188
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-798188: exit status 85 (59.641198ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-699862 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC |                     |
	|         | -p download-only-699862        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC | 26 Oct 24 00:43 UTC |
	| delete  | -p download-only-699862        | download-only-699862 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC | 26 Oct 24 00:43 UTC |
	| start   | -o=json --download-only        | download-only-798188 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC |                     |
	|         | -p download-only-798188        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 00:43:37
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 00:43:37.714855   17905 out.go:345] Setting OutFile to fd 1 ...
	I1026 00:43:37.715382   17905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:43:37.715430   17905 out.go:358] Setting ErrFile to fd 2...
	I1026 00:43:37.715447   17905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:43:37.715883   17905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 00:43:37.716773   17905 out.go:352] Setting JSON to true
	I1026 00:43:37.717573   17905 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1558,"bootTime":1729901860,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 00:43:37.717663   17905 start.go:139] virtualization: kvm guest
	I1026 00:43:37.719371   17905 out.go:97] [download-only-798188] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 00:43:37.719488   17905 notify.go:220] Checking for updates...
	I1026 00:43:37.720652   17905 out.go:169] MINIKUBE_LOCATION=19868
	I1026 00:43:37.722024   17905 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:43:37.723316   17905 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 00:43:37.724485   17905 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:43:37.725769   17905 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1026 00:43:37.727883   17905 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 00:43:37.728068   17905 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 00:43:37.758843   17905 out.go:97] Using the kvm2 driver based on user configuration
	I1026 00:43:37.758864   17905 start.go:297] selected driver: kvm2
	I1026 00:43:37.758869   17905 start.go:901] validating driver "kvm2" against <nil>
	I1026 00:43:37.759238   17905 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:43:37.759322   17905 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19868-8680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1026 00:43:37.773807   17905 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1026 00:43:37.773857   17905 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 00:43:37.774509   17905 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1026 00:43:37.774688   17905 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 00:43:37.774722   17905 cni.go:84] Creating CNI manager for ""
	I1026 00:43:37.774781   17905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 00:43:37.774792   17905 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 00:43:37.774851   17905 start.go:340] cluster config:
	{Name:download-only-798188 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-798188 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 00:43:37.775000   17905 iso.go:125] acquiring lock: {Name:mk4c9915a2f9db13ab3c4fd494f99ff15280961c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 00:43:37.776621   17905 out.go:97] Starting "download-only-798188" primary control-plane node in "download-only-798188" cluster
	I1026 00:43:37.776638   17905 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 00:43:37.880395   17905 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1026 00:43:37.880440   17905 cache.go:56] Caching tarball of preloaded images
	I1026 00:43:37.880614   17905 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1026 00:43:37.882329   17905 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1026 00:43:37.882344   17905 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1026 00:43:37.981103   17905 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/19868-8680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-798188 host does not exist
	  To start a cluster, run: "minikube start -p download-only-798188"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.06s)
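Note: the download logs above fetch the preload tarball with an md5 checksum embedded in the URL (checksum=md5:fc069bc1785feafa8477333f3a79092d for v1.31.2). As a rough, hypothetical illustration of what that verification amounts to, the following minimal standalone Go sketch recomputes the md5 of the cached tarball and compares it against that value. The cache path assumes the default MINIKUBE_HOME layout rather than the Jenkins override used above, and the helper is not minikube's own download code.

// preload_checksum_sketch.go - hypothetical helper, not part of minikube.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	// Expected sum taken from the preload URL in the log above.
	const expected = "fc069bc1785feafa8477333f3a79092d"
	// Assumed default cache location; adjust if MINIKUBE_HOME is set elsewhere.
	path := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4")

	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	sum := hex.EncodeToString(h.Sum(nil))
	if sum != expected {
		log.Fatalf("checksum mismatch: got %s, want %s", sum, expected)
	}
	fmt.Println("preload tarball checksum OK")
}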

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-798188
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1026 00:43:54.931890   17615 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-422612 --alsologtostderr --binary-mirror http://127.0.0.1:37063 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-422612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-422612
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
TestOffline (79.01s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-557358 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-557358 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m17.91632857s)
helpers_test.go:175: Cleaning up "offline-crio-557358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-557358
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-557358: (1.091665549s)
--- PASS: TestOffline (79.01s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-602145
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-602145: exit status 85 (49.267639ms)

                                                
                                                
-- stdout --
	* Profile "addons-602145" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-602145"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-602145
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-602145: exit status 85 (51.000926ms)

                                                
                                                
-- stdout --
	* Profile "addons-602145" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-602145"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (161.12s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-602145 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-602145 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m41.11509589s)
--- PASS: TestAddons/Setup (161.12s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-602145 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-602145 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.48s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-602145 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-602145 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0906784a-c8dd-47c4-a4ba-aab93d9d7b86] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0906784a-c8dd-47c4-a4ba-aab93d9d7b86] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004290207s
addons_test.go:633: (dbg) Run:  kubectl --context addons-602145 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-602145 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-602145 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.48s)

                                                
                                    
TestAddons/parallel/Registry (17.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.248591ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-pgk2s" [7960692c-0aab-43a0-89c7-aca8e7b3647f] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002959698s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-l5dxz" [d343ebc6-cfcc-44d1-974f-3bb153afc92e] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003577066s
addons_test.go:331: (dbg) Run:  kubectl --context addons-602145 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-602145 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-602145 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.192015174s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 ip
2024/10/26 00:47:14 [DEBUG] GET http://192.168.39.207:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.94s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.99s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-p59xx" [f77a620a-5896-4f35-84c0-f440a62a6d76] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003831783s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-602145 addons disable inspektor-gadget --alsologtostderr -v=1: (5.985963872s)
--- PASS: TestAddons/parallel/InspektorGadget (11.99s)

                                                
                                    
TestAddons/parallel/CSI (56.61s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1026 00:47:32.028110   17615 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1026 00:47:32.032938   17615 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1026 00:47:32.032968   17615 kapi.go:107] duration metric: took 4.863544ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.87428ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-602145 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-602145 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3eaca10a-6c3a-46fc-b340-341ef27fe093] Pending
helpers_test.go:344: "task-pv-pod" [3eaca10a-6c3a-46fc-b340-341ef27fe093] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3eaca10a-6c3a-46fc-b340-341ef27fe093] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004044166s
addons_test.go:511: (dbg) Run:  kubectl --context addons-602145 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-602145 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-602145 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-602145 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-602145 delete pod task-pv-pod: (1.181659698s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-602145 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-602145 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-602145 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1e1b66b2-0ebb-466b-b1e0-c1f43ef21b9d] Pending
helpers_test.go:344: "task-pv-pod-restore" [1e1b66b2-0ebb-466b-b1e0-c1f43ef21b9d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1e1b66b2-0ebb-466b-b1e0-c1f43ef21b9d] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004573653s
addons_test.go:553: (dbg) Run:  kubectl --context addons-602145 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-602145 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-602145 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-602145 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.709036801s)
--- PASS: TestAddons/parallel/CSI (56.61s)

                                                
                                    
TestAddons/parallel/Headlamp (21.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-602145 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-l2jb9" [6d840d52-d6d6-41d0-9956-e617ec1ca044] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-l2jb9" [6d840d52-d6d6-41d0-9956-e617ec1ca044] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-l2jb9" [6d840d52-d6d6-41d0-9956-e617ec1ca044] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.00423813s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-602145 addons disable headlamp --alsologtostderr -v=1: (5.840068968s)
--- PASS: TestAddons/parallel/Headlamp (21.64s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-5g7qn" [01134176-8115-4cfa-974a-01dfb4fd3e59] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004272295s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                    
TestAddons/parallel/LocalPath (54.88s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-602145 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-602145 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-602145 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [56f54b09-a50e-4c9f-87f3-279a0fe2c20e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [56f54b09-a50e-4c9f-87f3-279a0fe2c20e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [56f54b09-a50e-4c9f-87f3-279a0fe2c20e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004609929s
addons_test.go:906: (dbg) Run:  kubectl --context addons-602145 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 ssh "cat /opt/local-path-provisioner/pvc-323584fd-5eeb-4dce-983c-67e6333a4dfe_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-602145 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-602145 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-602145 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.128235627s)
--- PASS: TestAddons/parallel/LocalPath (54.88s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-njbmm" [d10ea740-696c-405e-abda-87f78aad39bb] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004896976s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                    
TestAddons/parallel/Yakd (11.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-lg4kc" [48a276fc-554f-4dd6-bca2-dba45d86f015] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004227316s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-602145 addons disable yakd --alsologtostderr -v=1: (5.692256758s)
--- PASS: TestAddons/parallel/Yakd (11.70s)

                                                
                                    
TestCertOptions (50.17s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-197478 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1026 01:53:36.030710   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:53:52.961578   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-197478 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (48.954956442s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-197478 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-197478 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-197478 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-197478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-197478
--- PASS: TestCertOptions (50.17s)

                                                
                                    
TestCertExpiration (238.4s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-999717 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-999717 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (39.628272398s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-999717 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-999717 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (17.973488347s)
helpers_test.go:175: Cleaning up "cert-expiration-999717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-999717
--- PASS: TestCertExpiration (238.40s)
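Note: TestCertExpiration above first starts the cluster with --cert-expiration=3m and then restarts it with --cert-expiration=8760h. To confirm the resulting certificate lifetime by hand, a small hypothetical Go helper like the one below can parse the apiserver certificate (the same /var/lib/minikube/certs/apiserver.crt path the cert-options test inspects with openssl) and print its NotAfter date. This is only a sketch for illustration, not part of the test suite; the path assumes you run it inside the minikube VM or copy the file out first (e.g. via "minikube ssh").

// certcheck_sketch.go - hypothetical helper, not part of minikube.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Print expiry and the remaining lifetime, rounded for readability.
	fmt.Printf("apiserver cert expires %s (in %s)\n",
		cert.NotAfter.Format(time.RFC3339),
		time.Until(cert.NotAfter).Round(time.Minute))
}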

                                                
                                    
TestForceSystemdFlag (55.68s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-831448 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-831448 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (54.391099194s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-831448 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-831448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-831448
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-831448: (1.022600028s)
--- PASS: TestForceSystemdFlag (55.68s)

                                                
                                    
TestForceSystemdEnv (89.03s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-933025 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-933025 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m27.997385479s)
helpers_test.go:175: Cleaning up "force-systemd-env-933025" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-933025
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-933025: (1.029451562s)
--- PASS: TestForceSystemdEnv (89.03s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.43s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1026 01:54:33.668248   17615 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1026 01:54:33.668386   17615 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1026 01:54:33.704789   17615 install.go:62] docker-machine-driver-kvm2: exit status 1
W1026 01:54:33.705178   17615 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1026 01:54:33.705242   17615 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3172277328/001/docker-machine-driver-kvm2
I1026 01:54:33.898043   17615 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3172277328/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020] Decompressors:map[bz2:0xc000788438 gz:0xc000788540 tar:0xc000788470 tar.bz2:0xc000788480 tar.gz:0xc0007884a0 tar.xz:0xc0007884b0 tar.zst:0xc0007884f0 tbz2:0xc000788480 tgz:0xc0007884a0 txz:0xc0007884b0 tzst:0xc0007884f0 xz:0xc000788548 zip:0xc000788580 zst:0xc000788590] Getters:map[file:0xc00167fe20 http:0xc000b92eb0 https:0xc000b92f00] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1026 01:54:33.898103   17615 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3172277328/001/docker-machine-driver-kvm2
I1026 01:54:36.445619   17615 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1026 01:54:36.445724   17615 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1026 01:54:36.473655   17615 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1026 01:54:36.473686   17615 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1026 01:54:36.473759   17615 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1026 01:54:36.473792   17615 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3172277328/002/docker-machine-driver-kvm2
I1026 01:54:36.526570   17615 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3172277328/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020] Decompressors:map[bz2:0xc000788438 gz:0xc000788540 tar:0xc000788470 tar.bz2:0xc000788480 tar.gz:0xc0007884a0 tar.xz:0xc0007884b0 tar.zst:0xc0007884f0 tbz2:0xc000788480 tgz:0xc0007884a0 txz:0xc0007884b0 tzst:0xc0007884f0 xz:0xc000788548 zip:0xc000788580 zst:0xc000788590] Getters:map[file:0xc001678e70 http:0xc00014fae0 https:0xc00014fb30] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1026 01:54:36.526620   17615 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3172277328/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.43s)
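Note: the log above shows the download helper first requesting the arch-specific docker-machine-driver-kvm2-amd64 asset, hitting a 404 on its checksum file, and then retrying the common, unsuffixed URL. The following standalone Go sketch illustrates that try-then-fall-back pattern against the same release URLs; it is an assumption-laden illustration, not minikube's actual download.go logic, and it omits checksum handling entirely.

// driver_fallback_sketch.go - hypothetical illustration, not minikube code.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// fetch downloads url into dst, treating any non-200 status as an error.
func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
	dst := "docker-machine-driver-kvm2"

	// Prefer the arch-specific binary; if that fails (e.g. 404 on an older
	// release), fall back to the common, unsuffixed asset name.
	if err := fetch(base+"-amd64", dst); err != nil {
		log.Printf("arch specific download failed: %v; trying the common version", err)
		if err := fetch(base, dst); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println("downloaded", dst)
}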

                                                
                                    
TestErrorSpam/setup (40.54s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-588402 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-588402 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-588402 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-588402 --driver=kvm2  --container-runtime=crio: (40.539247092s)
--- PASS: TestErrorSpam/setup (40.54s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.62s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 unpause
--- PASS: TestErrorSpam/unpause (1.62s)

                                                
                                    
TestErrorSpam/stop (5.29s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 stop: (2.312327845s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 stop: (1.439247014s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-588402 --log_dir /tmp/nospam-588402 stop: (1.54039652s)
--- PASS: TestErrorSpam/stop (5.29s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19868-8680/.minikube/files/etc/test/nested/copy/17615/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (79.75s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-335050 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1026 00:56:37.285620   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:56:37.291996   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:56:37.303334   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:56:37.324737   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:56:37.366166   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:56:37.447634   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:56:37.609163   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:56:37.930974   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:56:38.572991   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:56:39.854598   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:56:42.417506   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:56:47.538773   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:56:57.780757   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:57:18.262482   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-335050 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m19.744721429s)
--- PASS: TestFunctional/serial/StartWithProxy (79.75s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (33.48s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1026 00:57:29.441702   17615 config.go:182] Loaded profile config "functional-335050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-335050 --alsologtostderr -v=8
E1026 00:57:59.224363   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-335050 --alsologtostderr -v=8: (33.48142775s)
functional_test.go:663: soft start took 33.482074541s for "functional-335050" cluster.
I1026 00:58:02.923425   17615 config.go:182] Loaded profile config "functional-335050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (33.48s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-335050 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.91s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-335050 cache add registry.k8s.io/pause:3.1: (1.305691988s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-335050 cache add registry.k8s.io/pause:3.3: (1.283260749s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-335050 cache add registry.k8s.io/pause:latest: (1.323482401s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.91s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-335050 /tmp/TestFunctionalserialCacheCmdcacheadd_local2358796507/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 cache add minikube-local-cache-test:functional-335050
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-335050 cache add minikube-local-cache-test:functional-335050: (1.773088753s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 cache delete minikube-local-cache-test:functional-335050
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-335050
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-335050 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (210.433864ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-335050 cache reload: (1.062547042s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)
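Note: the cache reload sequence exercised above can be replayed by hand against the same profile (a sketch, assuming the functional-335050 cluster is still running; every command below is taken verbatim from the log above):

	out/minikube-linux-amd64 -p functional-335050 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-335050 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image no longer present on the node
	out/minikube-linux-amd64 -p functional-335050 cache reload                                            # pushes the host-side cached images back into the node
	out/minikube-linux-amd64 -p functional-335050 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to succeed again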

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 kubectl -- --context functional-335050 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-335050 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (34.83s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-335050 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-335050 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.833307007s)
functional_test.go:761: restart took 34.83340644s for "functional-335050" cluster.
I1026 00:58:46.212775   17615 config.go:182] Loaded profile config "functional-335050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (34.83s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-335050 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-335050 logs: (1.326317459s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 logs --file /tmp/TestFunctionalserialLogsFileCmd289554567/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-335050 logs --file /tmp/TestFunctionalserialLogsFileCmd289554567/001/logs.txt: (1.344056001s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.82s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-335050 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-335050
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-335050: exit status 115 (263.972014ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.146:31400 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-335050 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.82s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-335050 config get cpus: exit status 14 (62.146798ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-335050 config get cpus: exit status 14 (49.351349ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (26.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-335050 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-335050 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 27105: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (26.97s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-335050 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-335050 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (142.534997ms)

                                                
                                                
-- stdout --
	* [functional-335050] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 00:59:09.086152   26934 out.go:345] Setting OutFile to fd 1 ...
	I1026 00:59:09.086293   26934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:59:09.086303   26934 out.go:358] Setting ErrFile to fd 2...
	I1026 00:59:09.086308   26934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:59:09.086528   26934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 00:59:09.087110   26934 out.go:352] Setting JSON to false
	I1026 00:59:09.088238   26934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2489,"bootTime":1729901860,"procs":273,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 00:59:09.088342   26934 start.go:139] virtualization: kvm guest
	I1026 00:59:09.090429   26934 out.go:177] * [functional-335050] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 00:59:09.091859   26934 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 00:59:09.091886   26934 notify.go:220] Checking for updates...
	I1026 00:59:09.094528   26934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:59:09.096090   26934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 00:59:09.097482   26934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:59:09.098801   26934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 00:59:09.100189   26934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 00:59:09.101810   26934 config.go:182] Loaded profile config "functional-335050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 00:59:09.102198   26934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:59:09.102247   26934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:59:09.117169   26934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44405
	I1026 00:59:09.117648   26934 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:59:09.118233   26934 main.go:141] libmachine: Using API Version  1
	I1026 00:59:09.118256   26934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:59:09.118617   26934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:59:09.118801   26934 main.go:141] libmachine: (functional-335050) Calling .DriverName
	I1026 00:59:09.119054   26934 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 00:59:09.119440   26934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:59:09.119480   26934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:59:09.138547   26934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1026 00:59:09.139002   26934 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:59:09.139548   26934 main.go:141] libmachine: Using API Version  1
	I1026 00:59:09.139576   26934 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:59:09.139885   26934 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:59:09.140088   26934 main.go:141] libmachine: (functional-335050) Calling .DriverName
	I1026 00:59:09.172625   26934 out.go:177] * Using the kvm2 driver based on existing profile
	I1026 00:59:09.173549   26934 start.go:297] selected driver: kvm2
	I1026 00:59:09.173573   26934 start.go:901] validating driver "kvm2" against &{Name:functional-335050 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-335050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 00:59:09.173715   26934 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 00:59:09.175814   26934 out.go:201] 
	W1026 00:59:09.176992   26934 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1026 00:59:09.178136   26934 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-335050 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
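Note: the exit status 23 above is the intended RSRC_INSUFFICIENT_REQ_MEMORY validation for --memory 250MB; a dry run with a request at or above the 1800MB minimum reported in the error is expected to pass validation (a sketch only; the 2048MB value is illustrative and does not appear in the log):

	out/minikube-linux-amd64 start -p functional-335050 --dry-run --memory 2048MB --alsologtostderr --driver=kvm2  --container-runtime=crio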

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-335050 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-335050 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (132.456458ms)

                                                
                                                
-- stdout --
	* [functional-335050] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 00:59:22.946676   27400 out.go:345] Setting OutFile to fd 1 ...
	I1026 00:59:22.946771   27400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:59:22.946779   27400 out.go:358] Setting ErrFile to fd 2...
	I1026 00:59:22.946783   27400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:59:22.947042   27400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 00:59:22.947533   27400 out.go:352] Setting JSON to false
	I1026 00:59:22.948357   27400 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2503,"bootTime":1729901860,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 00:59:22.948448   27400 start.go:139] virtualization: kvm guest
	I1026 00:59:22.950800   27400 out.go:177] * [functional-335050] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1026 00:59:22.952223   27400 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 00:59:22.952228   27400 notify.go:220] Checking for updates...
	I1026 00:59:22.954748   27400 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:59:22.956101   27400 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 00:59:22.957329   27400 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 00:59:22.958533   27400 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 00:59:22.959741   27400 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 00:59:22.961261   27400 config.go:182] Loaded profile config "functional-335050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 00:59:22.961687   27400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:59:22.961768   27400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:59:22.976147   27400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I1026 00:59:22.976652   27400 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:59:22.977287   27400 main.go:141] libmachine: Using API Version  1
	I1026 00:59:22.977320   27400 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:59:22.977651   27400 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:59:22.977859   27400 main.go:141] libmachine: (functional-335050) Calling .DriverName
	I1026 00:59:22.978127   27400 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 00:59:22.978532   27400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 00:59:22.978632   27400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 00:59:22.993016   27400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39065
	I1026 00:59:22.993462   27400 main.go:141] libmachine: () Calling .GetVersion
	I1026 00:59:22.993984   27400 main.go:141] libmachine: Using API Version  1
	I1026 00:59:22.994013   27400 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 00:59:22.994294   27400 main.go:141] libmachine: () Calling .GetMachineName
	I1026 00:59:22.994462   27400 main.go:141] libmachine: (functional-335050) Calling .DriverName
	I1026 00:59:23.026361   27400 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1026 00:59:23.027990   27400 start.go:297] selected driver: kvm2
	I1026 00:59:23.028005   27400 start.go:901] validating driver "kvm2" against &{Name:functional-335050 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-335050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 00:59:23.028131   27400 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 00:59:23.030381   27400 out.go:201] 
	W1026 00:59:23.031786   27400 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1026 00:59:23.033221   27400 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-335050 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-335050 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-dr8s2" [eab46195-651d-4fbe-9e4e-c7c9b46dfd35] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-dr8s2" [eab46195-651d-4fbe-9e4e-c7c9b46dfd35] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.002715798s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.146:31416
functional_test.go:1675: http://192.168.39.146:31416: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-dr8s2

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.146:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.146:31416
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.46s)
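Note: the NodePort round trip exercised above can be reproduced manually (a sketch; the first three commands are verbatim from the log, and the final curl is an illustrative stand-in for the HTTP check the test performs):

	kubectl --context functional-335050 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-335050 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-335050 service hello-node-connect --url
	curl http://192.168.39.146:31416   # URL reported by the previous command; echoserver answers with the request details shown above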

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (46.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b0ecd350-cb11-47a1-bf8f-598074b3bb89] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005216295s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-335050 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-335050 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-335050 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-335050 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [27cd4e22-e4a9-4dcc-a1bd-301c238bdad1] Pending
helpers_test.go:344: "sp-pod" [27cd4e22-e4a9-4dcc-a1bd-301c238bdad1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [27cd4e22-e4a9-4dcc-a1bd-301c238bdad1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.003963676s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-335050 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-335050 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-335050 delete -f testdata/storage-provisioner/pod.yaml: (1.897037003s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-335050 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2cbfc83c-be7b-4b40-b75f-fd392b22edde] Pending
helpers_test.go:344: "sp-pod" [2cbfc83c-be7b-4b40-b75f-fd392b22edde] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1026 00:59:21.146691   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [2cbfc83c-be7b-4b40-b75f-fd392b22edde] Running
2024/10/26 00:59:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.004823523s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-335050 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.64s)
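Note: the persistence check above amounts to writing a file through the claim, recreating the consuming pod, and reading the file back (a sketch; commands taken from the log above, with the expected outcome noted as a comment):

	kubectl --context functional-335050 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-335050 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-335050 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-335050 exec sp-pod -- ls /tmp/mount   # foo should still be listed, showing the PVC outlived the pod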

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh -n functional-335050 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 cp functional-335050:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd668552955/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh -n functional-335050 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh -n functional-335050 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.35s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (24.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-335050 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-69rjm" [e03d2b79-2080-47e5-b777-eeebbd311510] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-69rjm" [e03d2b79-2080-47e5-b777-eeebbd311510] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.011101451s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-335050 exec mysql-6cdb49bbb-69rjm -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-335050 exec mysql-6cdb49bbb-69rjm -- mysql -ppassword -e "show databases;": exit status 1 (197.728408ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1026 00:59:28.675828   17615 retry.go:31] will retry after 618.582927ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-335050 exec mysql-6cdb49bbb-69rjm -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-335050 exec mysql-6cdb49bbb-69rjm -- mysql -ppassword -e "show databases;": exit status 1 (175.030617ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1026 00:59:29.470615   17615 retry.go:31] will retry after 1.011211624s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-335050 exec mysql-6cdb49bbb-69rjm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.37s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/17615/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "sudo cat /etc/test/nested/copy/17615/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/17615.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "sudo cat /etc/ssl/certs/17615.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/17615.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "sudo cat /usr/share/ca-certificates/17615.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/176152.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "sudo cat /etc/ssl/certs/176152.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/176152.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "sudo cat /usr/share/ca-certificates/176152.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.54s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-335050 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-335050 ssh "sudo systemctl is-active docker": exit status 1 (230.495669ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-335050 ssh "sudo systemctl is-active containerd": exit status 1 (306.879016ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
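What makes this pass is the exit code: systemctl is-active prints the unit state and exits non-zero for anything other than "active", and minikube ssh surfaces that as its own non-zero exit (the "Process exited with status 3" above), so "inactive" plus a failed command is exactly the expected outcome on a crio cluster. A minimal Go sketch of interpreting that result, assuming the binary and profile from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-335050",
		"ssh", "sudo systemctl is-active docker")
	out, err := cmd.Output() // stdout is still returned when the command exits non-zero
	state := strings.TrimSpace(string(out))

	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr):
		// A non-zero exit together with state "inactive" is the desired result here.
		fmt.Printf("docker unit is %q (exit code %d)\n", state, exitErr.ExitCode())
	case err != nil:
		fmt.Println("could not run the check:", err)
	default:
		fmt.Printf("docker unit unexpectedly reports %q\n", state)
	}
}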

TestFunctional/parallel/License (0.59s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-335050 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-335050 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-clfrl" [c2c53a7d-4e7d-4a15-8856-f71155d269f6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-clfrl" [c2c53a7d-4e7d-4a15-8856-f71155d269f6] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003297873s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "285.376449ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.870323ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "304.132691ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "46.798795ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.46s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

TestFunctional/parallel/MountCmd/any-port (8.63s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-335050 /tmp/TestFunctionalparallelMountCmdany-port180009000/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1729904336854935912" to /tmp/TestFunctionalparallelMountCmdany-port180009000/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1729904336854935912" to /tmp/TestFunctionalparallelMountCmdany-port180009000/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1729904336854935912" to /tmp/TestFunctionalparallelMountCmdany-port180009000/001/test-1729904336854935912
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-335050 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (229.218053ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1026 00:58:57.084493   17615 retry.go:31] will retry after 499.276181ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 26 00:58 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 26 00:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 26 00:58 test-1729904336854935912
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh cat /mount-9p/test-1729904336854935912
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-335050 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6d6cad34-b0f9-4b08-9584-8f36815afa2c] Pending
helpers_test.go:344: "busybox-mount" [6d6cad34-b0f9-4b08-9584-8f36815afa2c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6d6cad34-b0f9-4b08-9584-8f36815afa2c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6d6cad34-b0f9-4b08-9584-8f36815afa2c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004011229s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-335050 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-335050 /tmp/TestFunctionalparallelMountCmdany-port180009000/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.63s)
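The pattern here is: export a host directory into the VM over 9p with "minikube mount", then verify from the guest side that the mount is present and that host-written files are visible through it. A minimal Go sketch of the guest-side verification, assuming a mount onto /mount-9p is already running (binary, profile and mount point as in the log; the file name is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	checks := []string{
		"findmnt -T /mount-9p | grep 9p", // is the 9p mount there?
		"cat /mount-9p/created-by-test",  // can a host-written file be read through it?
	}
	for _, c := range checks {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-335050",
			"ssh", c).CombinedOutput()
		fmt.Printf("$ %s\nerr=%v\n%s\n", c, err, out)
	}
}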

TestFunctional/parallel/ServiceCmd/List (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.24s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 service list -o json
functional_test.go:1494: Took "238.381046ms" to run "out/minikube-linux-amd64 -p functional-335050 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.24s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.146:31150
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

TestFunctional/parallel/ServiceCmd/Format (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.146:31150
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
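Once the NodePort URL has been discovered, the service can be probed directly with any HTTP client. A minimal Go sketch, assuming the endpoint printed above (a fresh cluster will report a different node IP and port):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint reported by "minikube service hello-node --url" in the run above.
	url := "http://192.168.39.146:31150"

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("echoserver not reachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %s, %d bytes of echo output\n", resp.Status, len(body))
}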

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-335050 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-335050
localhost/kicbase/echo-server:functional-335050
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-335050 image ls --format short --alsologtostderr:
I1026 00:59:24.087178   27552 out.go:345] Setting OutFile to fd 1 ...
I1026 00:59:24.087285   27552 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:59:24.087294   27552 out.go:358] Setting ErrFile to fd 2...
I1026 00:59:24.087298   27552 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:59:24.087478   27552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
I1026 00:59:24.088012   27552 config.go:182] Loaded profile config "functional-335050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1026 00:59:24.088120   27552 config.go:182] Loaded profile config "functional-335050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1026 00:59:24.088433   27552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1026 00:59:24.088474   27552 main.go:141] libmachine: Launching plugin server for driver kvm2
I1026 00:59:24.103611   27552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
I1026 00:59:24.104100   27552 main.go:141] libmachine: () Calling .GetVersion
I1026 00:59:24.104891   27552 main.go:141] libmachine: Using API Version  1
I1026 00:59:24.104976   27552 main.go:141] libmachine: () Calling .SetConfigRaw
I1026 00:59:24.105325   27552 main.go:141] libmachine: () Calling .GetMachineName
I1026 00:59:24.105543   27552 main.go:141] libmachine: (functional-335050) Calling .GetState
I1026 00:59:24.107590   27552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1026 00:59:24.107641   27552 main.go:141] libmachine: Launching plugin server for driver kvm2
I1026 00:59:24.122293   27552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37729
I1026 00:59:24.122759   27552 main.go:141] libmachine: () Calling .GetVersion
I1026 00:59:24.123200   27552 main.go:141] libmachine: Using API Version  1
I1026 00:59:24.123230   27552 main.go:141] libmachine: () Calling .SetConfigRaw
I1026 00:59:24.123490   27552 main.go:141] libmachine: () Calling .GetMachineName
I1026 00:59:24.123657   27552 main.go:141] libmachine: (functional-335050) Calling .DriverName
I1026 00:59:24.123881   27552 ssh_runner.go:195] Run: systemctl --version
I1026 00:59:24.123908   27552 main.go:141] libmachine: (functional-335050) Calling .GetSSHHostname
I1026 00:59:24.126285   27552 main.go:141] libmachine: (functional-335050) DBG | domain functional-335050 has defined MAC address 52:54:00:46:6f:bb in network mk-functional-335050
I1026 00:59:24.126623   27552 main.go:141] libmachine: (functional-335050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:6f:bb", ip: ""} in network mk-functional-335050: {Iface:virbr1 ExpiryTime:2024-10-26 01:56:23 +0000 UTC Type:0 Mac:52:54:00:46:6f:bb Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:functional-335050 Clientid:01:52:54:00:46:6f:bb}
I1026 00:59:24.126648   27552 main.go:141] libmachine: (functional-335050) DBG | domain functional-335050 has defined IP address 192.168.39.146 and MAC address 52:54:00:46:6f:bb in network mk-functional-335050
I1026 00:59:24.126742   27552 main.go:141] libmachine: (functional-335050) Calling .GetSSHPort
I1026 00:59:24.126887   27552 main.go:141] libmachine: (functional-335050) Calling .GetSSHKeyPath
I1026 00:59:24.127031   27552 main.go:141] libmachine: (functional-335050) Calling .GetSSHUsername
I1026 00:59:24.127164   27552 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/functional-335050/id_rsa Username:docker}
I1026 00:59:24.203995   27552 ssh_runner.go:195] Run: sudo crictl images --output json
I1026 00:59:24.255781   27552 main.go:141] libmachine: Making call to close driver server
I1026 00:59:24.255799   27552 main.go:141] libmachine: (functional-335050) Calling .Close
I1026 00:59:24.256049   27552 main.go:141] libmachine: Successfully made call to close driver server
I1026 00:59:24.256065   27552 main.go:141] libmachine: (functional-335050) DBG | Closing plugin on server side
I1026 00:59:24.256070   27552 main.go:141] libmachine: Making call to close connection to plugin binary
I1026 00:59:24.256108   27552 main.go:141] libmachine: Making call to close driver server
I1026 00:59:24.256120   27552 main.go:141] libmachine: (functional-335050) Calling .Close
I1026 00:59:24.256319   27552 main.go:141] libmachine: (functional-335050) DBG | Closing plugin on server side
I1026 00:59:24.256341   27552 main.go:141] libmachine: Successfully made call to close driver server
I1026 00:59:24.256364   27552 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-335050 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/kicbase/echo-server           | functional-335050  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-335050  | 429ea14b1612c | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | 3b25b682ea82b | 196MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-335050 image ls --format table --alsologtostderr:
I1026 00:59:24.805511   27625 out.go:345] Setting OutFile to fd 1 ...
I1026 00:59:24.805620   27625 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:59:24.805628   27625 out.go:358] Setting ErrFile to fd 2...
I1026 00:59:24.805633   27625 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:59:24.805801   27625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
I1026 00:59:24.806386   27625 config.go:182] Loaded profile config "functional-335050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1026 00:59:24.806477   27625 config.go:182] Loaded profile config "functional-335050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1026 00:59:24.806820   27625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1026 00:59:24.806864   27625 main.go:141] libmachine: Launching plugin server for driver kvm2
I1026 00:59:24.821470   27625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
I1026 00:59:24.821894   27625 main.go:141] libmachine: () Calling .GetVersion
I1026 00:59:24.822429   27625 main.go:141] libmachine: Using API Version  1
I1026 00:59:24.822458   27625 main.go:141] libmachine: () Calling .SetConfigRaw
I1026 00:59:24.822771   27625 main.go:141] libmachine: () Calling .GetMachineName
I1026 00:59:24.822955   27625 main.go:141] libmachine: (functional-335050) Calling .GetState
I1026 00:59:24.824706   27625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1026 00:59:24.824753   27625 main.go:141] libmachine: Launching plugin server for driver kvm2
I1026 00:59:24.839132   27625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
I1026 00:59:24.839517   27625 main.go:141] libmachine: () Calling .GetVersion
I1026 00:59:24.840091   27625 main.go:141] libmachine: Using API Version  1
I1026 00:59:24.840119   27625 main.go:141] libmachine: () Calling .SetConfigRaw
I1026 00:59:24.840434   27625 main.go:141] libmachine: () Calling .GetMachineName
I1026 00:59:24.840611   27625 main.go:141] libmachine: (functional-335050) Calling .DriverName
I1026 00:59:24.840799   27625 ssh_runner.go:195] Run: systemctl --version
I1026 00:59:24.840826   27625 main.go:141] libmachine: (functional-335050) Calling .GetSSHHostname
I1026 00:59:24.843437   27625 main.go:141] libmachine: (functional-335050) DBG | domain functional-335050 has defined MAC address 52:54:00:46:6f:bb in network mk-functional-335050
I1026 00:59:24.843824   27625 main.go:141] libmachine: (functional-335050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:6f:bb", ip: ""} in network mk-functional-335050: {Iface:virbr1 ExpiryTime:2024-10-26 01:56:23 +0000 UTC Type:0 Mac:52:54:00:46:6f:bb Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:functional-335050 Clientid:01:52:54:00:46:6f:bb}
I1026 00:59:24.843860   27625 main.go:141] libmachine: (functional-335050) DBG | domain functional-335050 has defined IP address 192.168.39.146 and MAC address 52:54:00:46:6f:bb in network mk-functional-335050
I1026 00:59:24.843973   27625 main.go:141] libmachine: (functional-335050) Calling .GetSSHPort
I1026 00:59:24.844124   27625 main.go:141] libmachine: (functional-335050) Calling .GetSSHKeyPath
I1026 00:59:24.844267   27625 main.go:141] libmachine: (functional-335050) Calling .GetSSHUsername
I1026 00:59:24.844402   27625 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/functional-335050/id_rsa Username:docker}
I1026 00:59:24.925591   27625 ssh_runner.go:195] Run: sudo crictl images --output json
I1026 00:59:25.014077   27625 main.go:141] libmachine: Making call to close driver server
I1026 00:59:25.014098   27625 main.go:141] libmachine: (functional-335050) Calling .Close
I1026 00:59:25.014394   27625 main.go:141] libmachine: Successfully made call to close driver server
I1026 00:59:25.014412   27625 main.go:141] libmachine: Making call to close connection to plugin binary
I1026 00:59:25.014426   27625 main.go:141] libmachine: Making call to close driver server
I1026 00:59:25.014434   27625 main.go:141] libmachine: (functional-335050) Calling .Close
I1026 00:59:25.014678   27625 main.go:141] libmachine: Successfully made call to close driver server
I1026 00:59:25.014711   27625 main.go:141] libmachine: (functional-335050) DBG | Closing plugin on server side
I1026 00:59:25.014732   27625 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-335050 image ls --format json --alsologtostderr:
[{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162
074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-335050"],"size":"4943877"},{"id":"2e96e5913fc06e3d26915af3d0f2ca
5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.
io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5
aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"429ea14b1612ccfc40fe0510a7262c26eaa3ec68b1c3a64f0004b2b5af3abf0e","repoDigests":["localhost/minikube-local-cache-test@sha256:5ffc17b3fed8cc52791c10799c86f2acc71845c85aa601f407b56f6498e0dcac"],"repoTags":["localhost/minikube-local-cache-test:functional-335050"],"size":"3330"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935
a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df","repoDigests":["docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb","docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818008"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-335050 image ls --format json --alsologtostderr:
I1026 00:59:24.562474   27600 out.go:345] Setting OutFile to fd 1 ...
I1026 00:59:24.562602   27600 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:59:24.562612   27600 out.go:358] Setting ErrFile to fd 2...
I1026 00:59:24.562625   27600 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:59:24.562832   27600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
I1026 00:59:24.563384   27600 config.go:182] Loaded profile config "functional-335050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1026 00:59:24.563476   27600 config.go:182] Loaded profile config "functional-335050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1026 00:59:24.563831   27600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1026 00:59:24.563874   27600 main.go:141] libmachine: Launching plugin server for driver kvm2
I1026 00:59:24.578197   27600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34107
I1026 00:59:24.578680   27600 main.go:141] libmachine: () Calling .GetVersion
I1026 00:59:24.579236   27600 main.go:141] libmachine: Using API Version  1
I1026 00:59:24.579259   27600 main.go:141] libmachine: () Calling .SetConfigRaw
I1026 00:59:24.579645   27600 main.go:141] libmachine: () Calling .GetMachineName
I1026 00:59:24.579871   27600 main.go:141] libmachine: (functional-335050) Calling .GetState
I1026 00:59:24.581737   27600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1026 00:59:24.581774   27600 main.go:141] libmachine: Launching plugin server for driver kvm2
I1026 00:59:24.597161   27600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
I1026 00:59:24.597674   27600 main.go:141] libmachine: () Calling .GetVersion
I1026 00:59:24.598142   27600 main.go:141] libmachine: Using API Version  1
I1026 00:59:24.598165   27600 main.go:141] libmachine: () Calling .SetConfigRaw
I1026 00:59:24.598497   27600 main.go:141] libmachine: () Calling .GetMachineName
I1026 00:59:24.598695   27600 main.go:141] libmachine: (functional-335050) Calling .DriverName
I1026 00:59:24.598894   27600 ssh_runner.go:195] Run: systemctl --version
I1026 00:59:24.598924   27600 main.go:141] libmachine: (functional-335050) Calling .GetSSHHostname
I1026 00:59:24.601694   27600 main.go:141] libmachine: (functional-335050) DBG | domain functional-335050 has defined MAC address 52:54:00:46:6f:bb in network mk-functional-335050
I1026 00:59:24.602051   27600 main.go:141] libmachine: (functional-335050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:6f:bb", ip: ""} in network mk-functional-335050: {Iface:virbr1 ExpiryTime:2024-10-26 01:56:23 +0000 UTC Type:0 Mac:52:54:00:46:6f:bb Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:functional-335050 Clientid:01:52:54:00:46:6f:bb}
I1026 00:59:24.602091   27600 main.go:141] libmachine: (functional-335050) DBG | domain functional-335050 has defined IP address 192.168.39.146 and MAC address 52:54:00:46:6f:bb in network mk-functional-335050
I1026 00:59:24.602238   27600 main.go:141] libmachine: (functional-335050) Calling .GetSSHPort
I1026 00:59:24.602415   27600 main.go:141] libmachine: (functional-335050) Calling .GetSSHKeyPath
I1026 00:59:24.602556   27600 main.go:141] libmachine: (functional-335050) Calling .GetSSHUsername
I1026 00:59:24.602710   27600 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/functional-335050/id_rsa Username:docker}
I1026 00:59:24.700342   27600 ssh_runner.go:195] Run: sudo crictl images --output json
I1026 00:59:24.756346   27600 main.go:141] libmachine: Making call to close driver server
I1026 00:59:24.756360   27600 main.go:141] libmachine: (functional-335050) Calling .Close
I1026 00:59:24.756596   27600 main.go:141] libmachine: (functional-335050) DBG | Closing plugin on server side
I1026 00:59:24.756611   27600 main.go:141] libmachine: Successfully made call to close driver server
I1026 00:59:24.756621   27600 main.go:141] libmachine: Making call to close connection to plugin binary
I1026 00:59:24.756628   27600 main.go:141] libmachine: Making call to close driver server
I1026 00:59:24.756638   27600 main.go:141] libmachine: (functional-335050) Calling .Close
I1026 00:59:24.756899   27600 main.go:141] libmachine: (functional-335050) DBG | Closing plugin on server side
I1026 00:59:24.756893   27600 main.go:141] libmachine: Successfully made call to close driver server
I1026 00:59:24.756934   27600 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
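The JSON format above is an array of objects with id, repoDigests, repoTags and size (size is a string of bytes in this output, not a number). A minimal Go sketch that parses it, assuming the same binary and profile as in this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-335050",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-60s %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}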

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-335050 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-335050
size: "4943877"
- id: 429ea14b1612ccfc40fe0510a7262c26eaa3ec68b1c3a64f0004b2b5af3abf0e
repoDigests:
- localhost/minikube-local-cache-test@sha256:5ffc17b3fed8cc52791c10799c86f2acc71845c85aa601f407b56f6498e0dcac
repoTags:
- localhost/minikube-local-cache-test:functional-335050
size: "3330"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df
repoDigests:
- docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb
- docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26
repoTags:
- docker.io/library/nginx:latest
size: "195818008"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-335050 image ls --format yaml --alsologtostderr:
I1026 00:59:24.306403   27576 out.go:345] Setting OutFile to fd 1 ...
I1026 00:59:24.306520   27576 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:59:24.306528   27576 out.go:358] Setting ErrFile to fd 2...
I1026 00:59:24.306532   27576 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:59:24.306731   27576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
I1026 00:59:24.307394   27576 config.go:182] Loaded profile config "functional-335050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1026 00:59:24.307493   27576 config.go:182] Loaded profile config "functional-335050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1026 00:59:24.307827   27576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1026 00:59:24.307869   27576 main.go:141] libmachine: Launching plugin server for driver kvm2
I1026 00:59:24.322368   27576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40687
I1026 00:59:24.322887   27576 main.go:141] libmachine: () Calling .GetVersion
I1026 00:59:24.323470   27576 main.go:141] libmachine: Using API Version  1
I1026 00:59:24.323495   27576 main.go:141] libmachine: () Calling .SetConfigRaw
I1026 00:59:24.323835   27576 main.go:141] libmachine: () Calling .GetMachineName
I1026 00:59:24.324007   27576 main.go:141] libmachine: (functional-335050) Calling .GetState
I1026 00:59:24.325966   27576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1026 00:59:24.326011   27576 main.go:141] libmachine: Launching plugin server for driver kvm2
I1026 00:59:24.340298   27576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36175
I1026 00:59:24.340764   27576 main.go:141] libmachine: () Calling .GetVersion
I1026 00:59:24.341281   27576 main.go:141] libmachine: Using API Version  1
I1026 00:59:24.341302   27576 main.go:141] libmachine: () Calling .SetConfigRaw
I1026 00:59:24.341702   27576 main.go:141] libmachine: () Calling .GetMachineName
I1026 00:59:24.341945   27576 main.go:141] libmachine: (functional-335050) Calling .DriverName
I1026 00:59:24.342223   27576 ssh_runner.go:195] Run: systemctl --version
I1026 00:59:24.342273   27576 main.go:141] libmachine: (functional-335050) Calling .GetSSHHostname
I1026 00:59:24.345377   27576 main.go:141] libmachine: (functional-335050) DBG | domain functional-335050 has defined MAC address 52:54:00:46:6f:bb in network mk-functional-335050
I1026 00:59:24.345809   27576 main.go:141] libmachine: (functional-335050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:6f:bb", ip: ""} in network mk-functional-335050: {Iface:virbr1 ExpiryTime:2024-10-26 01:56:23 +0000 UTC Type:0 Mac:52:54:00:46:6f:bb Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:functional-335050 Clientid:01:52:54:00:46:6f:bb}
I1026 00:59:24.345840   27576 main.go:141] libmachine: (functional-335050) DBG | domain functional-335050 has defined IP address 192.168.39.146 and MAC address 52:54:00:46:6f:bb in network mk-functional-335050
I1026 00:59:24.345975   27576 main.go:141] libmachine: (functional-335050) Calling .GetSSHPort
I1026 00:59:24.346132   27576 main.go:141] libmachine: (functional-335050) Calling .GetSSHKeyPath
I1026 00:59:24.346272   27576 main.go:141] libmachine: (functional-335050) Calling .GetSSHUsername
I1026 00:59:24.346418   27576 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/functional-335050/id_rsa Username:docker}
I1026 00:59:24.439404   27576 ssh_runner.go:195] Run: sudo crictl images --output json
I1026 00:59:24.513175   27576 main.go:141] libmachine: Making call to close driver server
I1026 00:59:24.513189   27576 main.go:141] libmachine: (functional-335050) Calling .Close
I1026 00:59:24.513438   27576 main.go:141] libmachine: (functional-335050) DBG | Closing plugin on server side
I1026 00:59:24.513460   27576 main.go:141] libmachine: Successfully made call to close driver server
I1026 00:59:24.513474   27576 main.go:141] libmachine: Making call to close connection to plugin binary
I1026 00:59:24.513486   27576 main.go:141] libmachine: Making call to close driver server
I1026 00:59:24.513493   27576 main.go:141] libmachine: (functional-335050) Calling .Close
I1026 00:59:24.513687   27576 main.go:141] libmachine: Successfully made call to close driver server
I1026 00:59:24.513703   27576 main.go:141] libmachine: Making call to close connection to plugin binary
I1026 00:59:24.513760   27576 main.go:141] libmachine: (functional-335050) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-335050 ssh pgrep buildkitd: exit status 1 (245.177635ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image build -t localhost/my-image:functional-335050 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-335050 image build -t localhost/my-image:functional-335050 testdata/build --alsologtostderr: (5.196874828s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-335050 image build -t localhost/my-image:functional-335050 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ec2f6a8db94
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-335050
--> 0f9e4c2aa0d
Successfully tagged localhost/my-image:functional-335050
0f9e4c2aa0dec439ee9fa3c0da4a6bd1a2c1f932aa053a1c5fa1b821e1bf31db
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-335050 image build -t localhost/my-image:functional-335050 testdata/build --alsologtostderr:
I1026 00:59:25.311693   27678 out.go:345] Setting OutFile to fd 1 ...
I1026 00:59:25.312036   27678 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:59:25.312048   27678 out.go:358] Setting ErrFile to fd 2...
I1026 00:59:25.312055   27678 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:59:25.312397   27678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
I1026 00:59:25.313262   27678 config.go:182] Loaded profile config "functional-335050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1026 00:59:25.313878   27678 config.go:182] Loaded profile config "functional-335050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1026 00:59:25.314267   27678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1026 00:59:25.314319   27678 main.go:141] libmachine: Launching plugin server for driver kvm2
I1026 00:59:25.329395   27678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
I1026 00:59:25.329931   27678 main.go:141] libmachine: () Calling .GetVersion
I1026 00:59:25.330569   27678 main.go:141] libmachine: Using API Version  1
I1026 00:59:25.330590   27678 main.go:141] libmachine: () Calling .SetConfigRaw
I1026 00:59:25.330979   27678 main.go:141] libmachine: () Calling .GetMachineName
I1026 00:59:25.331190   27678 main.go:141] libmachine: (functional-335050) Calling .GetState
I1026 00:59:25.333043   27678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1026 00:59:25.333075   27678 main.go:141] libmachine: Launching plugin server for driver kvm2
I1026 00:59:25.347665   27678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
I1026 00:59:25.348218   27678 main.go:141] libmachine: () Calling .GetVersion
I1026 00:59:25.348828   27678 main.go:141] libmachine: Using API Version  1
I1026 00:59:25.348851   27678 main.go:141] libmachine: () Calling .SetConfigRaw
I1026 00:59:25.349216   27678 main.go:141] libmachine: () Calling .GetMachineName
I1026 00:59:25.349398   27678 main.go:141] libmachine: (functional-335050) Calling .DriverName
I1026 00:59:25.349628   27678 ssh_runner.go:195] Run: systemctl --version
I1026 00:59:25.349656   27678 main.go:141] libmachine: (functional-335050) Calling .GetSSHHostname
I1026 00:59:25.352199   27678 main.go:141] libmachine: (functional-335050) DBG | domain functional-335050 has defined MAC address 52:54:00:46:6f:bb in network mk-functional-335050
I1026 00:59:25.352565   27678 main.go:141] libmachine: (functional-335050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:6f:bb", ip: ""} in network mk-functional-335050: {Iface:virbr1 ExpiryTime:2024-10-26 01:56:23 +0000 UTC Type:0 Mac:52:54:00:46:6f:bb Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:functional-335050 Clientid:01:52:54:00:46:6f:bb}
I1026 00:59:25.352608   27678 main.go:141] libmachine: (functional-335050) DBG | domain functional-335050 has defined IP address 192.168.39.146 and MAC address 52:54:00:46:6f:bb in network mk-functional-335050
I1026 00:59:25.352717   27678 main.go:141] libmachine: (functional-335050) Calling .GetSSHPort
I1026 00:59:25.352880   27678 main.go:141] libmachine: (functional-335050) Calling .GetSSHKeyPath
I1026 00:59:25.353020   27678 main.go:141] libmachine: (functional-335050) Calling .GetSSHUsername
I1026 00:59:25.353137   27678 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/functional-335050/id_rsa Username:docker}
I1026 00:59:25.472837   27678 build_images.go:161] Building image from path: /tmp/build.3927037175.tar
I1026 00:59:25.472911   27678 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1026 00:59:25.493673   27678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3927037175.tar
I1026 00:59:25.497939   27678 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3927037175.tar: stat -c "%s %y" /var/lib/minikube/build/build.3927037175.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3927037175.tar': No such file or directory
I1026 00:59:25.497971   27678 ssh_runner.go:362] scp /tmp/build.3927037175.tar --> /var/lib/minikube/build/build.3927037175.tar (3072 bytes)
I1026 00:59:25.601577   27678 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3927037175
I1026 00:59:25.628419   27678 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3927037175 -xf /var/lib/minikube/build/build.3927037175.tar
I1026 00:59:25.637841   27678 crio.go:315] Building image: /var/lib/minikube/build/build.3927037175
I1026 00:59:25.637900   27678 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-335050 /var/lib/minikube/build/build.3927037175 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1026 00:59:30.437052   27678 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-335050 /var/lib/minikube/build/build.3927037175 --cgroup-manager=cgroupfs: (4.799108477s)
I1026 00:59:30.437126   27678 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3927037175
I1026 00:59:30.448385   27678 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3927037175.tar
I1026 00:59:30.457204   27678 build_images.go:217] Built localhost/my-image:functional-335050 from /tmp/build.3927037175.tar
I1026 00:59:30.457239   27678 build_images.go:133] succeeded building to: functional-335050
I1026 00:59:30.457244   27678 build_images.go:134] failed building to: 
I1026 00:59:30.457267   27678 main.go:141] libmachine: Making call to close driver server
I1026 00:59:30.457280   27678 main.go:141] libmachine: (functional-335050) Calling .Close
I1026 00:59:30.457655   27678 main.go:141] libmachine: Successfully made call to close driver server
I1026 00:59:30.457669   27678 main.go:141] libmachine: (functional-335050) DBG | Closing plugin on server side
I1026 00:59:30.457673   27678 main.go:141] libmachine: Making call to close connection to plugin binary
I1026 00:59:30.457683   27678 main.go:141] libmachine: Making call to close driver server
I1026 00:59:30.457690   27678 main.go:141] libmachine: (functional-335050) Calling .Close
I1026 00:59:30.457886   27678 main.go:141] libmachine: Successfully made call to close driver server
I1026 00:59:30.457899   27678 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.68s)
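
For reference, the three STEP lines in the build output above imply a three-line Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A minimal sketch of recreating an equivalent build context by hand against the same profile; the contents of content.txt are not shown in the log and are assumed arbitrary here, and /tmp/build-context is a hypothetical path standing in for testdata/build:
	mkdir -p /tmp/build-context && cd /tmp/build-context
	echo "placeholder" > content.txt
	printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
	out/minikube-linux-amd64 -p functional-335050 image build -t localhost/my-image:functional-335050 /tmp/build-context --alsologtostderr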

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.724827273s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-335050
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-335050 /tmp/TestFunctionalparallelMountCmdspecific-port281369153/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-335050 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (254.031928ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1026 00:59:05.736366   17615 retry.go:31] will retry after 591.509933ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-335050 /tmp/TestFunctionalparallelMountCmdspecific-port281369153/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-335050 ssh "sudo umount -f /mount-9p": exit status 1 (249.099799ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-335050 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-335050 /tmp/TestFunctionalparallelMountCmdspecific-port281369153/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.09s)
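
Outside the test harness, the same port-specific 9p mount can be exercised with the commands the test wraps; a minimal sketch, assuming the functional-335050 profile is running and /tmp/host-dir is a hypothetical host directory:
	out/minikube-linux-amd64 mount -p functional-335050 /tmp/host-dir:/mount-9p --port 46464 &
	out/minikube-linux-amd64 -p functional-335050 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 mount -p functional-335050 --kill=true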

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.31s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image load --daemon kicbase/echo-server:functional-335050 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-335050 image load --daemon kicbase/echo-server:functional-335050 --alsologtostderr: (3.216918052s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.49s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-335050 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1506282017/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-335050 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1506282017/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-335050 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1506282017/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-335050 ssh "findmnt -T" /mount1: exit status 1 (252.331866ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1026 00:59:07.822368   17615 retry.go:31] will retry after 355.507029ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-335050 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-335050 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1506282017/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-335050 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1506282017/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-335050 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1506282017/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image load --daemon kicbase/echo-server:functional-335050 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-335050
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image load --daemon kicbase/echo-server:functional-335050 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image save kicbase/echo-server:functional-335050 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image rm kicbase/echo-server:functional-335050 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-335050 image rm kicbase/echo-server:functional-335050 --alsologtostderr: (1.171823224s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-335050 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (6.971484058s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-335050
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-335050 image save --daemon kicbase/echo-server:functional-335050 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-335050
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-335050
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-335050
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-335050
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (194.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-300623 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1026 01:01:37.284140   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:02:04.988637   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-300623 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m13.437220895s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (194.10s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-300623 -- rollout status deployment/busybox: (4.601885474s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- exec busybox-7dff88458-mbn94 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- exec busybox-7dff88458-qtdcl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- exec busybox-7dff88458-x8rtl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- exec busybox-7dff88458-mbn94 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- exec busybox-7dff88458-qtdcl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- exec busybox-7dff88458-x8rtl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- exec busybox-7dff88458-mbn94 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- exec busybox-7dff88458-qtdcl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- exec busybox-7dff88458-x8rtl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.77s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- exec busybox-7dff88458-mbn94 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- exec busybox-7dff88458-mbn94 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- exec busybox-7dff88458-qtdcl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- exec busybox-7dff88458-qtdcl -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- exec busybox-7dff88458-x8rtl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-300623 -- exec busybox-7dff88458-x8rtl -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-300623 -v=7 --alsologtostderr
E1026 01:03:52.961759   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:03:52.968144   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:03:52.979548   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:03:53.000988   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:03:53.042432   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:03:53.123932   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:03:53.285766   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:03:53.607929   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:03:54.249865   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:03:55.531772   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:03:58.093394   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-300623 -v=7 --alsologtostderr: (55.980423389s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.84s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-300623 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp testdata/cp-test.txt ha-300623:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp ha-300623:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2355760230/001/cp-test_ha-300623.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp ha-300623:/home/docker/cp-test.txt ha-300623-m02:/home/docker/cp-test_ha-300623_ha-300623-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m02 "sudo cat /home/docker/cp-test_ha-300623_ha-300623-m02.txt"
E1026 01:04:03.214930   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp ha-300623:/home/docker/cp-test.txt ha-300623-m03:/home/docker/cp-test_ha-300623_ha-300623-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m03 "sudo cat /home/docker/cp-test_ha-300623_ha-300623-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp ha-300623:/home/docker/cp-test.txt ha-300623-m04:/home/docker/cp-test_ha-300623_ha-300623-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m04 "sudo cat /home/docker/cp-test_ha-300623_ha-300623-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp testdata/cp-test.txt ha-300623-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp ha-300623-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2355760230/001/cp-test_ha-300623-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp ha-300623-m02:/home/docker/cp-test.txt ha-300623:/home/docker/cp-test_ha-300623-m02_ha-300623.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623 "sudo cat /home/docker/cp-test_ha-300623-m02_ha-300623.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp ha-300623-m02:/home/docker/cp-test.txt ha-300623-m03:/home/docker/cp-test_ha-300623-m02_ha-300623-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m03 "sudo cat /home/docker/cp-test_ha-300623-m02_ha-300623-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp ha-300623-m02:/home/docker/cp-test.txt ha-300623-m04:/home/docker/cp-test_ha-300623-m02_ha-300623-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m04 "sudo cat /home/docker/cp-test_ha-300623-m02_ha-300623-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp testdata/cp-test.txt ha-300623-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2355760230/001/cp-test_ha-300623-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt ha-300623:/home/docker/cp-test_ha-300623-m03_ha-300623.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623 "sudo cat /home/docker/cp-test_ha-300623-m03_ha-300623.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt ha-300623-m02:/home/docker/cp-test_ha-300623-m03_ha-300623-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m02 "sudo cat /home/docker/cp-test_ha-300623-m03_ha-300623-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp ha-300623-m03:/home/docker/cp-test.txt ha-300623-m04:/home/docker/cp-test_ha-300623-m03_ha-300623-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m04 "sudo cat /home/docker/cp-test_ha-300623-m03_ha-300623-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp testdata/cp-test.txt ha-300623-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2355760230/001/cp-test_ha-300623-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt ha-300623:/home/docker/cp-test_ha-300623-m04_ha-300623.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623 "sudo cat /home/docker/cp-test_ha-300623-m04_ha-300623.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt ha-300623-m02:/home/docker/cp-test_ha-300623-m04_ha-300623-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m02 "sudo cat /home/docker/cp-test_ha-300623-m04_ha-300623-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 cp ha-300623-m04:/home/docker/cp-test.txt ha-300623-m03:/home/docker/cp-test_ha-300623-m04_ha-300623-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m04 "sudo cat /home/docker/cp-test.txt"
E1026 01:04:13.456613   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 ssh -n ha-300623-m03 "sudo cat /home/docker/cp-test_ha-300623-m04_ha-300623-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.96s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 node delete m03 -v=7 --alsologtostderr
E1026 01:13:52.961593   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-300623 node delete m03 -v=7 --alsologtostderr: (15.892669562s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.67s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (327.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-300623 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1026 01:16:37.285146   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:18:52.966022   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:20:16.027001   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:21:37.284295   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-300623 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m27.028077923s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (327.76s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (76.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-300623 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-300623 --control-plane -v=7 --alsologtostderr: (1m15.266313851s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-300623 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.09s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                    
TestJSONOutput/start/Command (84.17s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-659870 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1026 01:23:52.966504   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-659870 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m24.172512806s)
--- PASS: TestJSONOutput/start/Command (84.17s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-659870 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-659870 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.54s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-659870 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-659870 --output=json --user=testUser: (6.540385594s)
--- PASS: TestJSONOutput/stop/Command (6.54s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-758284 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-758284 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.413256ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"82182351-8548-47d9-9d08-de0284b90e82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-758284] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a66585ae-fec8-4dfe-b48b-52d3e3b11f05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19868"}}
	{"specversion":"1.0","id":"664d74b2-d615-49f7-bcb1-e019df07e6a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2e586f0d-cf66-49e5-b726-2acfae1adac5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig"}}
	{"specversion":"1.0","id":"700d0f7e-5676-44d7-8b05-86579d863822","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube"}}
	{"specversion":"1.0","id":"9504c72e-f10e-4d8f-a9ac-7c7884494b65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9c694ed2-bb43-40ac-a1a3-83edfb6f66f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4f2da119-add1-4d4a-9f45-73ecc6c0490a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-758284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-758284
--- PASS: TestErrorJSONOutput (0.20s)
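
Each line of the stdout above is a standalone CloudEvents-style JSON object, so the stream can be filtered outside the test with ordinary line-oriented tools; a minimal sketch using jq (not part of the test suite), reusing the throwaway profile name from the run above, to pull out the error message emitted for the unsupported driver:
	out/minikube-linux-amd64 start -p json-output-error-758284 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'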

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (88.47s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-140733 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-140733 --driver=kvm2  --container-runtime=crio: (42.426450022s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-151024 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-151024 --driver=kvm2  --container-runtime=crio: (43.206603809s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-140733
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-151024
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-151024" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-151024
helpers_test.go:175: Cleaning up "first-140733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-140733
--- PASS: TestMinikubeProfile (88.47s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.09s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-355913 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1026 01:26:37.285609   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-355913 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.094506177s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.09s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-355913 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-355913 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
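
The check above asserts that the guest reports a 9p mount for /minikube-host. A minimal stand-alone sketch of the same check is shown below; it shells out to the same binary and profile named in this log, and is not the actual helper in mount_start_test.go.

// verify_mount.go: a sketch (not the helper in mount_start_test.go) that re-runs
// the 9p check shown above and fails if no 9p mount is reported by the guest.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as in the log; the profile name is the one created above.
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "mount-start-1-355913", "ssh", "--", "mount").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "ssh failed: %v\n%s", err, out)
		os.Exit(1)
	}
	if !strings.Contains(string(out), "9p") {
		fmt.Fprintln(os.Stderr, "no 9p mount found in guest")
		os.Exit(1)
	}
	fmt.Println("9p mount present")
}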

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.57s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-368703 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-368703 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.573148146s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.57s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-368703 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-368703 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-355913 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-368703 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-368703 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-368703
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-368703: (1.277708807s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.55s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-368703
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-368703: (22.551388451s)
--- PASS: TestMountStart/serial/RestartStopped (23.55s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-368703 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-368703 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (107.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-328488 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1026 01:28:52.961837   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-328488 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m46.884940878s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.28s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328488 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328488 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-328488 -- rollout status deployment/busybox: (3.80522908s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328488 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328488 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328488 -- exec busybox-7dff88458-r4zfz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328488 -- exec busybox-7dff88458-snl6p -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328488 -- exec busybox-7dff88458-r4zfz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328488 -- exec busybox-7dff88458-snl6p -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328488 -- exec busybox-7dff88458-r4zfz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328488 -- exec busybox-7dff88458-snl6p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.23s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328488 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328488 -- exec busybox-7dff88458-r4zfz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328488 -- exec busybox-7dff88458-r4zfz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328488 -- exec busybox-7dff88458-snl6p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-328488 -- exec busybox-7dff88458-snl6p -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                    
TestMultiNode/serial/AddNode (47.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-328488 -v 3 --alsologtostderr
E1026 01:29:40.351598   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-328488 -v 3 --alsologtostderr: (46.95316707s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.50s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-328488 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.56s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 cp testdata/cp-test.txt multinode-328488:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 cp multinode-328488:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2176224653/001/cp-test_multinode-328488.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 cp multinode-328488:/home/docker/cp-test.txt multinode-328488-m02:/home/docker/cp-test_multinode-328488_multinode-328488-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488-m02 "sudo cat /home/docker/cp-test_multinode-328488_multinode-328488-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 cp multinode-328488:/home/docker/cp-test.txt multinode-328488-m03:/home/docker/cp-test_multinode-328488_multinode-328488-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488-m03 "sudo cat /home/docker/cp-test_multinode-328488_multinode-328488-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 cp testdata/cp-test.txt multinode-328488-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 cp multinode-328488-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2176224653/001/cp-test_multinode-328488-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 cp multinode-328488-m02:/home/docker/cp-test.txt multinode-328488:/home/docker/cp-test_multinode-328488-m02_multinode-328488.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488 "sudo cat /home/docker/cp-test_multinode-328488-m02_multinode-328488.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 cp multinode-328488-m02:/home/docker/cp-test.txt multinode-328488-m03:/home/docker/cp-test_multinode-328488-m02_multinode-328488-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488-m03 "sudo cat /home/docker/cp-test_multinode-328488-m02_multinode-328488-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 cp testdata/cp-test.txt multinode-328488-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 cp multinode-328488-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2176224653/001/cp-test_multinode-328488-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 cp multinode-328488-m03:/home/docker/cp-test.txt multinode-328488:/home/docker/cp-test_multinode-328488-m03_multinode-328488.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488 "sudo cat /home/docker/cp-test_multinode-328488-m03_multinode-328488.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 cp multinode-328488-m03:/home/docker/cp-test.txt multinode-328488-m02:/home/docker/cp-test_multinode-328488-m03_multinode-328488-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 ssh -n multinode-328488-m02 "sudo cat /home/docker/cp-test_multinode-328488-m03_multinode-328488-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.14s)
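
The copy-and-verify sequence above repeats one pattern for every source/destination pair: cp the file, then ssh in and cat it back. The sketch below only makes that looping structure explicit; runCmd is a hypothetical stand-in for the real helpers in helpers_test.go, and the node names are taken from this log.

// cp_matrix.go: a sketch of the copy-and-verify matrix exercised above.
package main

import (
	"fmt"
	"strings"
)

// runCmd is a hypothetical stand-in that just prints the command it would run.
func runCmd(args ...string) { fmt.Println("out/minikube-linux-amd64", strings.Join(args, " ")) }

func main() {
	profile := "multinode-328488"
	nodes := []string{"multinode-328488", "multinode-328488-m02", "multinode-328488-m03"}

	for _, src := range nodes {
		// host -> node, then verify over ssh
		runCmd("-p", profile, "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		runCmd("-p", profile, "ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")

		// node -> every other node, then verify on the destination
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			target := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			runCmd("-p", profile, "cp", src+":/home/docker/cp-test.txt", dst+":"+target)
			runCmd("-p", profile, "ssh", "-n", dst, "sudo cat "+target)
		}
	}
}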

                                                
                                    
TestMultiNode/serial/StopNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-328488 node stop m03: (1.410489419s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-328488 status: exit status 7 (407.796135ms)

                                                
                                                
-- stdout --
	multinode-328488
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-328488-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-328488-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-328488 status --alsologtostderr: exit status 7 (431.499752ms)

                                                
                                                
-- stdout --
	multinode-328488
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-328488-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-328488-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 01:30:35.672910   45238 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:30:35.673051   45238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:30:35.673061   45238 out.go:358] Setting ErrFile to fd 2...
	I1026 01:30:35.673068   45238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:30:35.673276   45238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 01:30:35.673469   45238 out.go:352] Setting JSON to false
	I1026 01:30:35.673502   45238 mustload.go:65] Loading cluster: multinode-328488
	I1026 01:30:35.673645   45238 notify.go:220] Checking for updates...
	I1026 01:30:35.673928   45238 config.go:182] Loaded profile config "multinode-328488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:30:35.673951   45238 status.go:174] checking status of multinode-328488 ...
	I1026 01:30:35.674365   45238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:30:35.674444   45238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:30:35.691541   45238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33903
	I1026 01:30:35.692030   45238 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:30:35.692603   45238 main.go:141] libmachine: Using API Version  1
	I1026 01:30:35.692623   45238 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:30:35.693004   45238 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:30:35.693223   45238 main.go:141] libmachine: (multinode-328488) Calling .GetState
	I1026 01:30:35.695029   45238 status.go:371] multinode-328488 host status = "Running" (err=<nil>)
	I1026 01:30:35.695050   45238 host.go:66] Checking if "multinode-328488" exists ...
	I1026 01:30:35.695463   45238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:30:35.695508   45238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:30:35.711515   45238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40947
	I1026 01:30:35.711980   45238 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:30:35.712491   45238 main.go:141] libmachine: Using API Version  1
	I1026 01:30:35.712514   45238 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:30:35.712864   45238 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:30:35.713054   45238 main.go:141] libmachine: (multinode-328488) Calling .GetIP
	I1026 01:30:35.715753   45238 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:30:35.716236   45238 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:30:35.716266   45238 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:30:35.716425   45238 host.go:66] Checking if "multinode-328488" exists ...
	I1026 01:30:35.716751   45238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:30:35.716791   45238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:30:35.732049   45238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I1026 01:30:35.732511   45238 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:30:35.733013   45238 main.go:141] libmachine: Using API Version  1
	I1026 01:30:35.733036   45238 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:30:35.733367   45238 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:30:35.733578   45238 main.go:141] libmachine: (multinode-328488) Calling .DriverName
	I1026 01:30:35.733752   45238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 01:30:35.733797   45238 main.go:141] libmachine: (multinode-328488) Calling .GetSSHHostname
	I1026 01:30:35.736752   45238 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:30:35.737158   45238 main.go:141] libmachine: (multinode-328488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:93:04", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:27:59 +0000 UTC Type:0 Mac:52:54:00:1a:93:04 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-328488 Clientid:01:52:54:00:1a:93:04}
	I1026 01:30:35.737187   45238 main.go:141] libmachine: (multinode-328488) DBG | domain multinode-328488 has defined IP address 192.168.39.35 and MAC address 52:54:00:1a:93:04 in network mk-multinode-328488
	I1026 01:30:35.737321   45238 main.go:141] libmachine: (multinode-328488) Calling .GetSSHPort
	I1026 01:30:35.737518   45238 main.go:141] libmachine: (multinode-328488) Calling .GetSSHKeyPath
	I1026 01:30:35.737725   45238 main.go:141] libmachine: (multinode-328488) Calling .GetSSHUsername
	I1026 01:30:35.737865   45238 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/multinode-328488/id_rsa Username:docker}
	I1026 01:30:35.825567   45238 ssh_runner.go:195] Run: systemctl --version
	I1026 01:30:35.832518   45238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:30:35.849832   45238 kubeconfig.go:125] found "multinode-328488" server: "https://192.168.39.35:8443"
	I1026 01:30:35.849860   45238 api_server.go:166] Checking apiserver status ...
	I1026 01:30:35.849912   45238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:30:35.866307   45238 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup
	W1026 01:30:35.878572   45238 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1026 01:30:35.878625   45238 ssh_runner.go:195] Run: ls
	I1026 01:30:35.883187   45238 api_server.go:253] Checking apiserver healthz at https://192.168.39.35:8443/healthz ...
	I1026 01:30:35.887520   45238 api_server.go:279] https://192.168.39.35:8443/healthz returned 200:
	ok
	I1026 01:30:35.887548   45238 status.go:463] multinode-328488 apiserver status = Running (err=<nil>)
	I1026 01:30:35.887560   45238 status.go:176] multinode-328488 status: &{Name:multinode-328488 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 01:30:35.887580   45238 status.go:174] checking status of multinode-328488-m02 ...
	I1026 01:30:35.888009   45238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:30:35.888059   45238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:30:35.903661   45238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40809
	I1026 01:30:35.904150   45238 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:30:35.904648   45238 main.go:141] libmachine: Using API Version  1
	I1026 01:30:35.904672   45238 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:30:35.904963   45238 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:30:35.905137   45238 main.go:141] libmachine: (multinode-328488-m02) Calling .GetState
	I1026 01:30:35.906743   45238 status.go:371] multinode-328488-m02 host status = "Running" (err=<nil>)
	I1026 01:30:35.906762   45238 host.go:66] Checking if "multinode-328488-m02" exists ...
	I1026 01:30:35.907052   45238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:30:35.907091   45238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:30:35.922915   45238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33193
	I1026 01:30:35.923324   45238 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:30:35.923771   45238 main.go:141] libmachine: Using API Version  1
	I1026 01:30:35.923790   45238 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:30:35.924128   45238 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:30:35.924315   45238 main.go:141] libmachine: (multinode-328488-m02) Calling .GetIP
	I1026 01:30:35.927412   45238 main.go:141] libmachine: (multinode-328488-m02) DBG | domain multinode-328488-m02 has defined MAC address 52:54:00:0d:9e:69 in network mk-multinode-328488
	I1026 01:30:35.927922   45238 main.go:141] libmachine: (multinode-328488-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:9e:69", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:28:56 +0000 UTC Type:0 Mac:52:54:00:0d:9e:69 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-328488-m02 Clientid:01:52:54:00:0d:9e:69}
	I1026 01:30:35.927953   45238 main.go:141] libmachine: (multinode-328488-m02) DBG | domain multinode-328488-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0d:9e:69 in network mk-multinode-328488
	I1026 01:30:35.928067   45238 host.go:66] Checking if "multinode-328488-m02" exists ...
	I1026 01:30:35.928479   45238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:30:35.928534   45238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:30:35.943882   45238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36815
	I1026 01:30:35.944336   45238 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:30:35.944820   45238 main.go:141] libmachine: Using API Version  1
	I1026 01:30:35.944837   45238 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:30:35.945188   45238 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:30:35.945364   45238 main.go:141] libmachine: (multinode-328488-m02) Calling .DriverName
	I1026 01:30:35.945559   45238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 01:30:35.945582   45238 main.go:141] libmachine: (multinode-328488-m02) Calling .GetSSHHostname
	I1026 01:30:35.947967   45238 main.go:141] libmachine: (multinode-328488-m02) DBG | domain multinode-328488-m02 has defined MAC address 52:54:00:0d:9e:69 in network mk-multinode-328488
	I1026 01:30:35.948458   45238 main.go:141] libmachine: (multinode-328488-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:9e:69", ip: ""} in network mk-multinode-328488: {Iface:virbr1 ExpiryTime:2024-10-26 02:28:56 +0000 UTC Type:0 Mac:52:54:00:0d:9e:69 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-328488-m02 Clientid:01:52:54:00:0d:9e:69}
	I1026 01:30:35.948486   45238 main.go:141] libmachine: (multinode-328488-m02) DBG | domain multinode-328488-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:0d:9e:69 in network mk-multinode-328488
	I1026 01:30:35.948671   45238 main.go:141] libmachine: (multinode-328488-m02) Calling .GetSSHPort
	I1026 01:30:35.948863   45238 main.go:141] libmachine: (multinode-328488-m02) Calling .GetSSHKeyPath
	I1026 01:30:35.949029   45238 main.go:141] libmachine: (multinode-328488-m02) Calling .GetSSHUsername
	I1026 01:30:35.949168   45238 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19868-8680/.minikube/machines/multinode-328488-m02/id_rsa Username:docker}
	I1026 01:30:36.024486   45238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:30:36.038875   45238 status.go:176] multinode-328488-m02 status: &{Name:multinode-328488-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1026 01:30:36.038921   45238 status.go:174] checking status of multinode-328488-m03 ...
	I1026 01:30:36.039307   45238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1026 01:30:36.039355   45238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1026 01:30:36.054723   45238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32967
	I1026 01:30:36.055222   45238 main.go:141] libmachine: () Calling .GetVersion
	I1026 01:30:36.055657   45238 main.go:141] libmachine: Using API Version  1
	I1026 01:30:36.055679   45238 main.go:141] libmachine: () Calling .SetConfigRaw
	I1026 01:30:36.056045   45238 main.go:141] libmachine: () Calling .GetMachineName
	I1026 01:30:36.056212   45238 main.go:141] libmachine: (multinode-328488-m03) Calling .GetState
	I1026 01:30:36.057741   45238 status.go:371] multinode-328488-m03 host status = "Stopped" (err=<nil>)
	I1026 01:30:36.057754   45238 status.go:384] host is not running, skipping remaining checks
	I1026 01:30:36.057759   45238 status.go:176] multinode-328488-m03 status: &{Name:multinode-328488-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
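
The status trace above ends with an apiserver probe against https://192.168.39.35:8443/healthz that returns 200. A minimal sketch of that probe is shown below; it skips certificate verification purely to stay self-contained and is not minikube's actual api_server.go check, which a production probe would replace with proper certificate verification.

// healthz_probe.go: a sketch of the apiserver healthz probe seen in the status
// trace above; it issues the same GET and reports the status code and body.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Skipping certificate verification keeps the sketch self-contained.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://192.168.39.35:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.39.35:8443/healthz returned %d: %s\n", resp.StatusCode, body)
}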

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-328488 node start m03 -v=7 --alsologtostderr: (38.865216622s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.48s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-328488 node delete m03: (1.478168619s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.99s)
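
The readiness assertion above feeds `kubectl get nodes -o go-template` a template that walks every node's conditions and prints the status of the Ready condition. The sketch below runs the same template (copied verbatim from the log) against a trimmed, made-up stand-in for the nodes JSON, just to show what it evaluates to.

// ready_template.go: a sketch of what the go-template used above produces.
// The JSON below is a stand-in for `kubectl get nodes -o json`, not real output.
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

const nodes = `{
  "items": [
    {"status": {"conditions": [{"type": "MemoryPressure", "status": "False"},
                               {"type": "Ready", "status": "True"}]}},
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}}
  ]
}`

func main() {
	var data any
	if err := json.Unmarshal([]byte(nodes), &data); err != nil {
		panic(err)
	}
	// Prints one " True" line per node, the kind of output the assertion above inspects.
	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}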

                                                
                                    
TestMultiNode/serial/RestartMultiNode (198.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-328488 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1026 01:41:37.285200   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-328488 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m17.857597184s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-328488 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (198.38s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-328488
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-328488-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-328488-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (67.810475ms)

                                                
                                                
-- stdout --
	* [multinode-328488-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-328488-m02' is duplicated with machine name 'multinode-328488-m02' in profile 'multinode-328488'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-328488-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-328488-m03 --driver=kvm2  --container-runtime=crio: (42.445384167s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-328488
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-328488: exit status 80 (217.400053ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-328488 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-328488-m03 already exists in multinode-328488-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-328488-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.76s)
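
The exit status 14 above comes from a uniqueness rule: a new profile may not reuse a machine name that already belongs to an existing multi-node profile. The sketch below illustrates that rule with the names from this log; validateProfileName is hypothetical and is not minikube's implementation.

// name_conflict.go: a sketch of the uniqueness rule exercised above.
package main

import "fmt"

// validateProfileName rejects a new profile whose name collides with a machine
// name that already belongs to an existing multi-node profile.
func validateProfileName(name string, existingMachines []string) error {
	for _, m := range existingMachines {
		if m == name {
			return fmt.Errorf("profile name %q is duplicated with machine name %q", name, m)
		}
	}
	return nil
}

func main() {
	machines := []string{"multinode-328488", "multinode-328488-m02"}

	// Collides with the m02 machine of the existing profile -> rejected (exit 14 above).
	fmt.Println(validateProfileName("multinode-328488-m02", machines))

	// A fresh name is accepted, which is why the multinode-328488-m03 profile started.
	fmt.Println(validateProfileName("multinode-328488-m03", machines))
}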

                                                
                                    
TestScheduledStopUnix (109.2s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-494893 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-494893 --memory=2048 --driver=kvm2  --container-runtime=crio: (37.591649205s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-494893 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-494893 -n scheduled-stop-494893
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-494893 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1026 01:48:21.515132   17615 retry.go:31] will retry after 100.984µs: open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/scheduled-stop-494893/pid: no such file or directory
I1026 01:48:21.516298   17615 retry.go:31] will retry after 91.056µs: open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/scheduled-stop-494893/pid: no such file or directory
I1026 01:48:21.517384   17615 retry.go:31] will retry after 118.161µs: open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/scheduled-stop-494893/pid: no such file or directory
I1026 01:48:21.518523   17615 retry.go:31] will retry after 297.974µs: open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/scheduled-stop-494893/pid: no such file or directory
I1026 01:48:21.519636   17615 retry.go:31] will retry after 293.022µs: open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/scheduled-stop-494893/pid: no such file or directory
I1026 01:48:21.520748   17615 retry.go:31] will retry after 1.069284ms: open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/scheduled-stop-494893/pid: no such file or directory
I1026 01:48:21.521866   17615 retry.go:31] will retry after 1.513496ms: open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/scheduled-stop-494893/pid: no such file or directory
I1026 01:48:21.524072   17615 retry.go:31] will retry after 1.938198ms: open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/scheduled-stop-494893/pid: no such file or directory
I1026 01:48:21.526256   17615 retry.go:31] will retry after 3.415672ms: open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/scheduled-stop-494893/pid: no such file or directory
I1026 01:48:21.530481   17615 retry.go:31] will retry after 3.6455ms: open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/scheduled-stop-494893/pid: no such file or directory
I1026 01:48:21.534672   17615 retry.go:31] will retry after 3.226126ms: open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/scheduled-stop-494893/pid: no such file or directory
I1026 01:48:21.538909   17615 retry.go:31] will retry after 8.262582ms: open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/scheduled-stop-494893/pid: no such file or directory
I1026 01:48:21.548118   17615 retry.go:31] will retry after 12.503405ms: open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/scheduled-stop-494893/pid: no such file or directory
I1026 01:48:21.561373   17615 retry.go:31] will retry after 26.254065ms: open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/scheduled-stop-494893/pid: no such file or directory
I1026 01:48:21.588721   17615 retry.go:31] will retry after 37.681247ms: open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/scheduled-stop-494893/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-494893 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-494893 -n scheduled-stop-494893
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-494893
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-494893 --schedule 15s
E1026 01:48:52.965742   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-494893
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-494893: exit status 7 (64.825386ms)

                                                
                                                
-- stdout --
	scheduled-stop-494893
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-494893 -n scheduled-stop-494893
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-494893 -n scheduled-stop-494893: exit status 7 (64.001467ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-494893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-494893
--- PASS: TestScheduledStopUnix (109.20s)
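
The "will retry after ..." lines above show retry.go reopening the scheduled-stop pid file with steadily growing delays. The sketch below reproduces that shape with a doubling-plus-jitter schedule; the exact policy is an assumption for illustration, as minikube's retry package may compute its delays differently.

// retry_sketch.go: a sketch of the kind of retry loop behind the
// "will retry after ..." lines above.
package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// retryOpen keeps trying to open path with growing delays until it succeeds
// or the attempts are exhausted.
func retryOpen(path string, attempts int) (*os.File, error) {
	delay := 100 * time.Microsecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		f, err := os.Open(path)
		if err == nil {
			return f, nil
		}
		lastErr = err
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	return nil, lastErr
}

func main() {
	if f, err := retryOpen("/home/jenkins/minikube-integration/19868-8680/.minikube/profiles/scheduled-stop-494893/pid", 10); err == nil {
		f.Close()
	}
}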

                                                
                                    
TestRunningBinaryUpgrade (174.54s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2385863491 start -p running-upgrade-061004 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2385863491 start -p running-upgrade-061004 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m23.502802175s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-061004 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-061004 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m27.582354462s)
helpers_test.go:175: Cleaning up "running-upgrade-061004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-061004
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-061004: (1.14911611s)
--- PASS: TestRunningBinaryUpgrade (174.54s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-694381 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-694381 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (90.121927ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-694381] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
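
The MK_USAGE failure above is a mutual-exclusion check: --kubernetes-version cannot be combined with --no-kubernetes. A minimal sketch of that rule with the standard flag package is below; minikube's real CLI wiring differs, and only the rule itself is illustrated here.

// flag_conflict.go: a sketch of the mutual-exclusion rule behind the MK_USAGE
// error above, written with the standard flag package.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // matches the exit status reported above
	}
	fmt.Println("flags ok")
}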

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (112.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-694381 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-694381 --driver=kvm2  --container-runtime=crio: (1m52.729067355s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-694381 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (112.98s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-694381 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1026 01:51:37.284884   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-694381 --no-kubernetes --driver=kvm2  --container-runtime=crio: (17.529196952s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-694381 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-694381 status -o json: exit status 2 (254.659312ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-694381","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-694381
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-694381: (1.072662028s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.86s)
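
The `status -o json` output above is a single JSON object. The sketch below decodes it and prints the combination the test expects after --no-kubernetes: host running, kubelet and apiserver stopped. The struct mirrors only the keys visible in that output.

// status_json.go: a sketch that decodes the `status -o json` line shown above.
package main

import (
	"encoding/json"
	"fmt"
)

type status struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
	Worker    bool
}

func main() {
	out := `{"Name":"NoKubernetes-694381","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`

	var st status
	if err := json.Unmarshal([]byte(out), &st); err != nil {
		panic(err)
	}
	// With --no-kubernetes the VM keeps running while kubelet and the apiserver stay stopped.
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}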

                                                
                                    
TestNoKubernetes/serial/Start (40.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-694381 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-694381 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.053614808s)
--- PASS: TestNoKubernetes/serial/Start (40.05s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-694381 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-694381 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.533425ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.75s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-694381
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-694381: (1.304278779s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (46.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-694381 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-694381 --driver=kvm2  --container-runtime=crio: (46.176598419s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (46.18s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-694381 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-694381 "sudo systemctl is-active --quiet service kubelet": exit status 1 (195.643991ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.26s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (109.63s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.745403128 start -p stopped-upgrade-300387 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.745403128 start -p stopped-upgrade-300387 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (49.221379667s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.745403128 -p stopped-upgrade-300387 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.745403128 -p stopped-upgrade-300387 stop: (2.045606065s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-300387 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-300387 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.367511945s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (109.63s)
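
Editor's note: the upgrade path above is: provision with an older release binary, stop the cluster, then start the same profile with the binary under test. A condensed sketch of that flow, using the commands shown in the log (the versioned /tmp filename is the pre-downloaded legacy binary):
	# 1. Create the cluster with the legacy v1.26.0 binary.
	/tmp/minikube-v1.26.0.745403128 start -p stopped-upgrade-300387 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	# 2. Stop it with the same legacy binary.
	/tmp/minikube-v1.26.0.745403128 -p stopped-upgrade-300387 stop
	# 3. Restart the stopped profile with the binary under test, which must upgrade it in place.
	out/minikube-linux-amd64 start -p stopped-upgrade-300387 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio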

                                                
                                    
x
+
TestPause/serial/Start (85.73s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-226333 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-226333 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m25.725430901s)
--- PASS: TestPause/serial/Start (85.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-761631 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-761631 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (675.795788ms)

                                                
                                                
-- stdout --
	* [false-761631] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 01:54:24.285305   57636 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:54:24.285626   57636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:54:24.285637   57636 out.go:358] Setting ErrFile to fd 2...
	I1026 01:54:24.285655   57636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:54:24.285851   57636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-8680/.minikube/bin
	I1026 01:54:24.286421   57636 out.go:352] Setting JSON to false
	I1026 01:54:24.287415   57636 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5804,"bootTime":1729901860,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 01:54:24.287516   57636 start.go:139] virtualization: kvm guest
	I1026 01:54:24.289766   57636 out.go:177] * [false-761631] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1026 01:54:24.291387   57636 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 01:54:24.291401   57636 notify.go:220] Checking for updates...
	I1026 01:54:24.293910   57636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:54:24.295272   57636 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-8680/kubeconfig
	I1026 01:54:24.296584   57636 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-8680/.minikube
	I1026 01:54:24.298021   57636 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 01:54:24.299469   57636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:54:24.301264   57636 config.go:182] Loaded profile config "kubernetes-upgrade-970804": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1026 01:54:24.301444   57636 config.go:182] Loaded profile config "pause-226333": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1026 01:54:24.301577   57636 config.go:182] Loaded profile config "stopped-upgrade-300387": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1026 01:54:24.301689   57636 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 01:54:24.897196   57636 out.go:177] * Using the kvm2 driver based on user configuration
	I1026 01:54:24.898906   57636 start.go:297] selected driver: kvm2
	I1026 01:54:24.898927   57636 start.go:901] validating driver "kvm2" against <nil>
	I1026 01:54:24.898946   57636 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:54:24.901154   57636 out.go:201] 
	W1026 01:54:24.902245   57636 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1026 01:54:24.903455   57636 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-761631 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-761631

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-761631

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-761631

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-761631

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-761631

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-761631

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-761631

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-761631

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-761631

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-761631

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-761631

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-761631" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-761631" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-761631

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761631"

                                                
                                                
----------------------- debugLogs end: false-761631 [took: 3.259488457s] --------------------------------
helpers_test.go:175: Cleaning up "false-761631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-761631
--- PASS: TestNetworkPlugins/group/false (4.12s)
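
Editor's note: the exit status 14 above is the point of this test. With --cni=false, minikube refuses to start because the crio runtime requires a CNI, and the debugLogs that follow only confirm that no profile was ever created. As a hedged sketch, a start invocation that satisfies the requirement would pass an explicit CNI instead of disabling it (bridge is used here as an assumed example value; the other flags mirror the failing command):
	# crio needs a pod network; pick a CNI explicitly rather than --cni=false.
	out/minikube-linux-amd64 start -p false-761631 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio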

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-300387
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (105.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-093148 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-093148 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m45.96557297s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (105.97s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (53.92s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-226333 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-226333 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.898046515s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (53.92s)

                                                
                                    
x
+
TestPause/serial/Pause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-226333 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.68s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.23s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-226333 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-226333 --output=json --layout=cluster: exit status 2 (232.607079ms)

                                                
                                                
-- stdout --
	{"Name":"pause-226333","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-226333","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.23s)
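
Editor's note: the JSON above encodes the paused state as HTTP-style codes (418 Paused, 405 Stopped, 200 OK), and "status" deliberately exits 2 while the cluster is paused. A small sketch of pulling those fields out, assuming jq is installed on the host:
	# status exits 2 while paused, so do not let a strict shell abort on it.
	out/minikube-linux-amd64 status -p pause-226333 --output=json --layout=cluster > /tmp/pause-status.json || true
	# 418 => Paused for the apiserver, 405 => Stopped for the kubelet.
	jq -r '.Nodes[0].Components | {apiserver: .apiserver.StatusName, kubelet: .kubelet.StatusName}' /tmp/pause-status.json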

                                                
                                    
x
+
TestPause/serial/Unpause (0.6s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-226333 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.60s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.71s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-226333 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.71s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.78s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-226333 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.78s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.6s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.60s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (55.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-767480 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1026 01:56:37.285214   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-767480 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (55.092771046s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-093148 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [34789ee5-dad1-4115-b92d-39279ef3891c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [34789ee5-dad1-4115-b92d-39279ef3891c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005312882s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-093148 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.33s)
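
Editor's note: the harness polls for a pod matching "integration-test=busybox" until it is Running and healthy, then execs into it. Roughly the same check can be expressed with kubectl alone; a sketch, assuming the default namespace and the context created by the test:
	kubectl --context no-preload-093148 create -f testdata/busybox.yaml
	# Wait (up to the test's 8m budget) for the labeled pod to become Ready.
	kubectl --context no-preload-093148 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	# The test then checks the container's file-descriptor limit.
	kubectl --context no-preload-093148 exec busybox -- /bin/sh -c "ulimit -n"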

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-093148 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-093148 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.102998192s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-093148 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-767480 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cc5c98c7-431f-4722-8c46-33dafff2a3c0] Pending
helpers_test.go:344: "busybox" [cc5c98c7-431f-4722-8c46-33dafff2a3c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cc5c98c7-431f-4722-8c46-33dafff2a3c0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003531524s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-767480 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-767480 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-767480 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (620.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-093148 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-093148 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (10m20.51058101s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-093148 -n no-preload-093148
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (620.76s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (556.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-767480 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-767480 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (9m15.804218429s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767480 -n embed-certs-767480
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (556.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (5.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-385716 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-385716 --alsologtostderr -v=3: (5.305654819s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-385716 -n old-k8s-version-385716
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-385716 -n old-k8s-version-385716: exit status 7 (63.148512ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-385716 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
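
Editor's note: here exit status 7 from "status" is the signal that the host is stopped (the stdout is just "Stopped"), and enabling the dashboard addon afterwards still succeeds; as far as this flow shows, the enable appears to be recorded in the profile and applied when the cluster is started again. A hedged sketch of the same sequence:
	# Exit code 7 (host stopped) is tolerated; anything else would be a real failure.
	out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-385716 -n old-k8s-version-385716 || echo "status exited with $? (Stopped is expected here)"
	# Enable the addon against the stopped profile; it should take effect on the next start.
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-385716 --images=MetricsScraper=registry.k8s.io/echoserver:1.4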

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-661357 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1026 02:11:37.284412   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-661357 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m21.300265827s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-661357 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c9b0d313-34c5-4a3b-9172-ea1015817010] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c9b0d313-34c5-4a3b-9172-ea1015817010] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003150997s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-661357 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-661357 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-661357 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (579.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-661357 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-661357 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (9m39.115444226s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-661357 -n default-k8s-diff-port-661357
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (579.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (48.72s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-274222 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-274222 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (48.722119258s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (93.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-761631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-761631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m33.977088819s)
--- PASS: TestNetworkPlugins/group/auto/Start (93.98s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-274222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-274222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.06128309s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)
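
Editor's note: the "cni mode requires additional setup" warning is why the DeployApp step above and the post-stop app checks further down are skipped for this group: the cluster was started with --network-plugin=cni and a pod-network CIDR, but no CNI was installed, so workload pods cannot schedule. A hedged sketch of the missing step, assuming a locally available CNI manifest (the kube-flannel.yml path is a placeholder) configured for 10.42.0.0/16:
	# Install a pod network so workloads can schedule; the manifest path is illustrative only.
	kubectl --context newest-cni-274222 apply -f ./kube-flannel.yml
	# Then watch the node go Ready once the CNI pods come up.
	kubectl --context newest-cni-274222 get nodes -w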

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-274222 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-274222 --alsologtostderr -v=3: (7.351317184s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.35s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-274222 -n newest-cni-274222
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-274222 -n newest-cni-274222: exit status 7 (63.545416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-274222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (38.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-274222 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-274222 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (37.769712968s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-274222 -n newest-cni-274222
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-274222 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-274222 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-274222 -n newest-cni-274222
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-274222 -n newest-cni-274222: exit status 2 (235.362503ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-274222 -n newest-cni-274222
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-274222 -n newest-cni-274222: exit status 2 (234.846388ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-274222 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-274222 -n newest-cni-274222
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-274222 -n newest-cni-274222
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (61.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-761631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-761631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m1.841277616s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.84s)
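
Editor's note: with --cni=kindnet the start is only half the story; the per-plugin checks that follow verify the plugin actually runs. A quick hedged spot check, assuming the kindnet daemon set keeps its usual "app=kindnet" label in kube-system:
	# The kindnet pod should be Running on the single node.
	kubectl --context kindnet-761631 get pods -n kube-system -l app=kindnet -o wide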

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (98.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-761631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E1026 02:26:37.284312   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-761631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m38.12268305s)
--- PASS: TestNetworkPlugins/group/calico/Start (98.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-761631 "pgrep -a kubelet"
I1026 02:26:47.309760   17615 config.go:182] Loaded profile config "auto-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (14.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-761631 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qmdzk" [c935f02e-9b64-4db8-9680-8cc566ab3ea1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1026 02:26:55.374871   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:26:55.381282   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:26:55.392736   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:26:55.414212   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:26:55.455609   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:26:55.537150   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-qmdzk" [c935f02e-9b64-4db8-9680-8cc566ab3ea1] Running
E1026 02:26:55.699254   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:26:56.020970   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:26:56.034384   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:26:56.663326   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:26:57.944612   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:27:00.506574   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.00416725s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-761631 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-761631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-761631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
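Each CNI group in this run exercises the same three connectivity probes through the netcat deployment created from testdata/netcat-deployment.yaml. A minimal sketch of the equivalent manual checks for the auto profile (the context name auto-761631 is taken from this run):

	# DNS: cluster DNS must resolve the default Kubernetes API Service
	kubectl --context auto-761631 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: TCP connect to port 8080 inside the pod itself
	kubectl --context auto-761631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# HairPin: the pod reaching itself through its own Service name (the hairpin case)
	kubectl --context auto-761631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"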

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (70.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-761631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-761631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m10.67504266s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.68s)
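As exercised in this run, --cni accepts either a built-in plugin name or a path to a CNI manifest. A minimal sketch of the two invocation forms, with profile names and memory size taken from the commands above:

	# built-in plugin selected by name
	minikube start -p kindnet-761631 --memory=3072 --cni=kindnet --driver=kvm2 --container-runtime=crio
	# custom CNI manifest supplied by path
	minikube start -p custom-flannel-761631 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio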

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tl2mj" [f4035d1a-f374-4c40-99ef-4a21663092df] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005526215s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
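The ControllerPod step simply waits for the CNI's daemon pod to become Ready. A roughly equivalent manual check (not what helpers_test.go actually runs, which polls through the Go client) would be:

	# wait up to the test's 10m budget for the kindnet daemon pod to be Ready
	kubectl --context kindnet-761631 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=600s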

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-761631 "pgrep -a kubelet"
I1026 02:27:27.834166   17615 config.go:182] Loaded profile config "kindnet-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-761631 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z5vfz" [ebc9c689-e3ea-4ceb-a075-036732e195a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z5vfz" [ebc9c689-e3ea-4ceb-a075-036732e195a3] Running
E1026 02:27:36.351517   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.077030782s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-761631 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-761631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-761631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (55.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-761631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-761631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (55.74609225s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (55.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ckxkv" [e85a71f1-b107-4f03-b2cd-1ab426e58f85] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00569246s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-761631 "pgrep -a kubelet"
I1026 02:28:07.309881   17615 config.go:182] Loaded profile config "calico-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-761631 replace --force -f testdata/netcat-deployment.yaml
I1026 02:28:07.595846   17615 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z44gt" [9290f199-450f-47ee-8e69-ebf929812d63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1026 02:28:10.882015   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:28:10.888465   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:28:10.899860   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:28:10.921324   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:28:10.962762   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:28:11.044359   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:28:11.205847   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:28:11.527560   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:28:12.169828   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:28:13.451839   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-z44gt" [9290f199-450f-47ee-8e69-ebf929812d63] Running
E1026 02:28:16.013477   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:28:17.313764   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004160154s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-761631 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-761631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-761631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-761631 "pgrep -a kubelet"
I1026 02:28:29.211614   17615 config.go:182] Loaded profile config "custom-flannel-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-761631 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pvpvj" [2d40e449-4bd5-4ceb-b3d0-e541e9704ac7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1026 02:28:31.376449   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-pvpvj" [2d40e449-4bd5-4ceb-b3d0-e541e9704ac7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004439551s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (70.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-761631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-761631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m10.090234452s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-761631 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-761631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-761631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-761631 "pgrep -a kubelet"
I1026 02:28:52.345848   17615 config.go:182] Loaded profile config "enable-default-cni-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-761631 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9j8r7" [7a89e3e1-35dd-4aa0-a927-d0e6061210f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1026 02:28:52.961496   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-9j8r7" [7a89e3e1-35dd-4aa0-a927-d0e6061210f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004392434s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (93.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-761631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-761631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m33.355635434s)
--- PASS: TestNetworkPlugins/group/bridge/Start (93.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-761631 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-761631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-761631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6n456" [3b980a28-6750-42ef-b4b2-5e2578b6fb72] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00375507s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-761631 "pgrep -a kubelet"
I1026 02:29:53.040662   17615 config.go:182] Loaded profile config "flannel-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-761631 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wzgzl" [cc6c4743-7a8e-4fe7-b9af-f865a1f5f047] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wzgzl" [cc6c4743-7a8e-4fe7-b9af-f865a1f5f047] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004249554s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-761631 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-761631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-761631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-761631 "pgrep -a kubelet"
I1026 02:30:30.383316   17615 config.go:182] Loaded profile config "bridge-761631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-761631 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dn855" [8b8fa33f-66b9-49af-b232-2722da350070] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dn855" [8b8fa33f-66b9-49af-b232-2722da350070] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003450165s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-761631 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-761631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-761631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
E1026 02:30:54.742450   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:31:37.284572   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/addons-602145/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:31:47.595307   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:31:47.601688   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:31:47.613066   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:31:47.634559   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:31:47.676502   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:31:47.758003   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:31:47.919554   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:31:48.241283   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:31:48.883289   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:31:50.164810   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:31:52.726108   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:31:55.375823   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:31:57.847996   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:32:08.090108   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:32:21.599252   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:32:21.605611   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:32:21.616939   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:32:21.639089   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:32:21.680468   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:32:21.761862   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:32:21.923418   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:32:22.245128   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:32:22.886987   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:32:23.076646   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/no-preload-093148/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:32:24.168827   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:32:26.730845   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:32:28.571587   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:32:31.853101   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:32:42.094690   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:01.033494   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:01.039913   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:01.051309   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:01.072714   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:01.114128   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:01.195575   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:01.357104   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:01.678944   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:02.320991   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:02.576836   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:03.603286   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:06.164747   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:09.533795   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/auto-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:10.882285   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:11.286226   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:21.528433   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:29.468018   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:29.474381   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:29.485733   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:29.507096   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:29.548555   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:29.629986   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:29.791345   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:30.113061   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:30.754731   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:32.036971   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:34.598715   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:38.584754   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/old-k8s-version-385716/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:39.720573   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:42.010549   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/calico-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:43.539132   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/kindnet-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:49.962879   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/custom-flannel-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:52.557821   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:52.564170   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:52.575507   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:52.596842   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:52.638162   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:52.719585   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:52.881097   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:52.961547   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/functional-335050/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:53.203139   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:53.844875   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:55.126251   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:33:57.688357   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"
E1026 02:34:02.810292   17615 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-8680/.minikube/profiles/enable-default-cni-761631/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    

Test skip (39/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.2/cached-images 0
15 TestDownloadOnly/v1.31.2/binaries 0
16 TestDownloadOnly/v1.31.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.28
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
269 TestStartStop/group/disable-driver-mounts 0.14
275 TestNetworkPlugins/group/kubenet 2.89
283 TestNetworkPlugins/group/cilium 5.32
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.28s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-602145 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
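This TunnelCmd skip, and the seven that follow, share one cause: the tunnel tests need to run 'route' without a password prompt. A minimal sketch of one way such a pre-check could be written (an assumption; the actual helper in functional_test_tunnel_test.go:90 may differ): probe non-interactive sudo and skip when it fails.

package functional

import (
	"os/exec"
	"testing"
)

// Illustrative pre-check: 'sudo -n' refuses to prompt, so it exits non-zero
// (as in the "exit status 1" above) whenever a password would be required,
// and the tunnel tests can be skipped.
func requirePasswordlessRoute(t *testing.T) {
	if err := exec.Command("sudo", "-n", "route", "-n").Run(); err != nil {
		t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
	}
}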

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver

--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-713871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-713871
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
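A minimal sketch of the skip-plus-cleanup pattern shown above, with an assumed driver lookup (the real test reads the driver differently; its helpers live in start_stop_delete_test.go and helpers_test.go): the test bails out on non-virtualbox drivers but still deletes its temporary profile with the minikube binary under test.

package startstop

import (
	"os"
	"os/exec"
	"testing"
)

// Illustrative only. The deferred delete mirrors helpers_test.go:178;
// the DRIVER environment variable is an assumption made for this sketch.
func testDisableDriverMounts(t *testing.T, profile string) {
	defer exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).Run()
	if os.Getenv("DRIVER") != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
}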

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-761631 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-761631

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-761631

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-761631

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-761631

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-761631

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-761631

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-761631

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-761631

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-761631

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-761631

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-761631

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-761631" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-761631" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-761631

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761631"

                                                
                                                
----------------------- debugLogs end: kubenet-761631 [took: 2.737309021s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-761631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-761631
--- SKIP: TestNetworkPlugins/group/kubenet (2.89s)
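The debugLogs block above is mechanical: every ">>> label:" header is one diagnostic command run against the never-started kubenet-761631 profile, which is why each entry fails with a missing-context or missing-profile message. A rough sketch of that collection loop, with an illustrative command list and helper name (not the real code in net_test.go):

package nettest

import (
	"fmt"
	"os/exec"
)

// Run each labelled command and print its output (and error, if any)
// under a ">>> label:" header, as in the log above.
func dumpDebugLogs(profile string) {
	cmds := []struct{ label, cmd string }{
		{"netcat: nslookup kubernetes.default", "kubectl --context " + profile + " exec deploy/netcat -- nslookup kubernetes.default"},
		{"host: /etc/cni", "out/minikube-linux-amd64 -p " + profile + " ssh -- sudo ls -la /etc/cni"},
	}
	for _, c := range cmds {
		fmt.Printf(">>> %s:\n", c.label)
		out, err := exec.Command("sh", "-c", c.cmd).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println(err)
		}
	}
}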

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-761631 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-761631

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-761631

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-761631

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-761631

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-761631

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-761631

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-761631

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-761631

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-761631

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-761631

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-761631

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-761631" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-761631

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-761631

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-761631

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-761631

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-761631" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-761631" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-761631

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-761631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761631"

                                                
                                                
----------------------- debugLogs end: cilium-761631 [took: 5.157187288s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-761631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-761631
--- SKIP: TestNetworkPlugins/group/cilium (5.32s)

                                                
                                    